Towards an Active Network Architecture
https://dl.acm.org/doi/10.1145/1290168.1290180
What problem does this paper address
Traditional data networks transfer data between end hosts without modification and are insensitive to the data they carry.
Only limited computation is enabled at the intermediate routers (e.g. header processing, signaling).
Applications and users have pressing needs for the transparent interposition of computation within the network
e.g. firewalls, web proxies and other services (i.e. DNS, multicast routers), and mobile / nomadic gateways.
This paper thus proposes active networks: highly programmable networks that allow intermediate switches / routers to perform customized computations on the user data passing through them.
Provides flexibility in applications and computations (I think this is the #1 benefit)
It does talk about mobility, safety, and efficiency, but I think those are all about how to implement this without compromising network performance; note that there is no real evaluation of end-to-end system performance, and only rare discussion related to it.
Do you believe the problem is / was important
MOTIVATION
1) The paper starts with a section explaining a series of leading applications that can benefit from the in-network computation capability enabled by active networks.
Trend is more functionality in the network
Circuit switching --> packet switching : little computation --> header processing
2) Availability of active technologies
I.e. mechanisms that allow users to inject customized programs into shared resources (e.g. routers, switches, servers)
Provide for safe execution by restricting the set of primitive actions available to mobile programs and the scope of their operands (e.g. access to storage and other resources); see the sketch after this list
Program encoding approaches (source, intermediate, platform-dependent binary)
3) Rising processing power and bandwidth
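To make the "safe execution via restricted primitives" idea from point 2 concrete, here is a minimal Python sketch. Everything in it (make_sandbox, run_mobile_program, the load/save/add primitives) is my own hypothetical illustration of the mechanism, not code or an API from the paper.

```python
# Hypothetical sketch: run an injected "mobile program" while restricting
# both the primitive actions it can take and the scope of its operands.

def make_sandbox():
    store = {}  # private storage; the only state the program can touch

    def load(key):           # primitive: read from own store
        return store.get(key)

    def save(key, value):    # primitive: write to own store
        store[key] = value

    def add(a, b):           # primitive: basic computation
        return a + b

    # The mobile program sees only these primitives -- no files, sockets,
    # or node-wide state.
    return {"load": load, "save": save, "add": add}

def run_mobile_program(source, primitives):
    # An empty __builtins__ removes open(), __import__, etc.
    # (Illustrative only: restricted exec is not a real sandbox; a
    # production system would need much stronger isolation.)
    exec(source, {"__builtins__": {}, **primitives})

env = make_sandbox()
run_mobile_program("save('count', add(load('count') or 0, 1))", env)
run_mobile_program("save('count', add(load('count') or 0, 1))", env)
# The injected program could only compute and touch its private store;
# 'count' is now 2.
```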
OTHER BENEFITS
Later on, the paper argues that the flexibility of in-network computation:
accelerates the pace of innovation
enables the deployment of even greater computational power at the edges
What does edges mean here?
enables a range of new applications
leads to broader implications for how we think about networks and their protocols, and for the infrastructure innovation process
I.e. Layered approach --> component-based approach
I.e. HW-SW bundled --> "virtualized approach" where HW and SW are decoupled
What is the authors' main thought, what is the solution
The authors propose a vision of an active network infrastructure that can be programmed by users
They envision that the use of active networks will enable a range of new applications and have broader implications for how the network infrastructure evolves and innovates
The general approach synthesizes a number of technologies including programmable node platforms, component-based software engineering, and code mobility (i.e. enabled by the availability of "active technologies")
Introduces the notion of "capsules"
Bits arriving on incoming links are processed in a fashion that identifies capsule boundaries
Each capsule's contents are dispatched to a transient execution environment to be safely evaluated
Programs
are composed of"primitive" instructions that perform basic computations on the capsule contents
can invoke external "methods"
provide access to resources external to the transient environment
Execution of a capsule --> 0 or more capsules for transmission on the outgoing link
Might change the non-transient state of the node
The transient environment is destroyed when capsule evaluation terminates
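To make the capsule lifecycle above concrete, here is a minimal Python sketch of how I read Section 3. The Capsule / ActiveNode classes, the exec-based evaluation, and the cache_get / cache_put / transmit methods are my own assumptions for illustration; the paper does not specify a concrete programming model like this.

```python
from dataclasses import dataclass, field

@dataclass
class Capsule:
    program: str          # instructions evaluated at each active node
    payload: bytes = b""  # user data the program operates on

@dataclass
class ActiveNode:
    soft_state: dict = field(default_factory=dict)  # non-transient node state

    def evaluate(self, capsule: Capsule) -> list:
        outgoing = []  # capsules produced for the outgoing links

        # Transient execution environment: created per capsule and
        # discarded when evaluation terminates.
        transient_env = {
            "payload": capsule.payload,
            # External "methods" grant controlled access to resources
            # outside the transient environment; cache_put is how a
            # capsule can change the node's non-transient state.
            "cache_get": self.soft_state.get,
            "cache_put": self.soft_state.__setitem__,
            "transmit": lambda prog, data: outgoing.append(Capsule(prog, data)),
        }
        exec(capsule.program, {"__builtins__": {}}, transient_env)

        # Evaluation yields zero or more capsules; the transient
        # environment simply goes out of scope here.
        return outgoing

# Example: a capsule that caches its payload on the node and forwards it.
node = ActiveNode()
c = Capsule("cache_put('seen', payload); transmit('fwd', payload)", b"hello")
print(node.evaluate(c))   # -> [Capsule(program='fwd', payload=b'hello')]
print(node.soft_state)    # -> {'seen': b'hello'}  (survives the capsule)
```

The cache_put call is also my guess at what Section 3.2 means by a capsule changing non-transient state: the soft_state dict outlives the transient environment.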
Do you think the solution is a good one
Is the solution good? positive / negative sides
Pros
Provides use-case illustrations showing how these applications can benefit from the active network vision
Examines some problems of the traditional network infrastructure that need to be addressed given the trend (i.e. more functionality moving into the network)
Proposes an approach to solve this problem
Uses some of the available "active technologies" in the network context to achieve goals like safety and efficient code mobility
Cons
All the computations regarding safety and security concerns (i.e. verifications etc.) seem to add overhead to the shared resources (i.e. routers, switches). Do the intermediate routers have enough computational / storage resources to satisfy these needs, and can they operate fast enough?
How does this approach scale in a general-purpose networking environment? Is the approach suited for the large-scale Internet?
Is the current mechanism really secure and safe? How does the mechanism react to a malicious program?
Programs may have different computational needs, and routers may have different resource capacities; how does resource allocation work in this scenario? The paper only rarely mentions this, and only talks a bit about a default uniform resource allocation.
Other comments / thoughts
How does an active network (mainly its working mechanism) differ from what we refer to as a "programmable network" today? I am generally curious about how the hardware has changed and how that affects design decisions: those made back then versus designs for in-network computation today.
A bit confused by 3.2: why is the execution of a capsule able to change the non-transient state of the node?
The discrete vs. integrated approach is a bit hard to understand; are there example applications for each scenario?