QOS SUPPORT FOR DYNAMICALLY RECONFIGURABLE MULTIMEDIA APPLICATIONS
Scott Mitchell, Hani Naguib,
George Coulouris and Tim Kindberg
Distributed Systems Laboratory, Department of Computer Science, Queen Mary and Westfield College
University of London
{scott,hanin,george,timk}@dcs.qmw.ac.uk
Abstract: The use of multimedia in distributed systems has begun to include such complex and mission-critical domains as digital television production, ‘video-on-demand’ services, medical and security systems. These applications impose more stringent requirements on the support mechanisms provided by underlying networks and operating systems than most currently deployed continuous media applications. This paper describes the DJINN multimedia programming framework, which is designed to support the construction and dynamic reconfiguration of distributed multimedia applications. We motivate the benefits of a runtime model of the quality of service and other characteristics of multimedia applications, and demonstrate a generic algorithm for scheduling dynamic reconfigurations that maintains QoS guarantees. QoS characteristics are modelled as piecewise-linear or quadratic relations, which are solved using standard constraint programming techniques. During reconfigurations, updates to active components are scheduled so as to maintain temporal constraints on the media streams. We illustrate our approach using experimental results from a real-world application domain.

Keywords: Components, multimedia, quality of service, dynamic reconfiguration.
1 INTRODUCTION
The use of multimedia—or more particularly continuous, real-time media streams—in distributed systems has begun to include such complex and mission-critical domains as digital television production, ‘video-on-demand’ services, medical applications and security systems. Because of the enrichment they bring to application content, we believe that this trend will continue and that more and more distributed mission-critical applications will begin to incorporate continuous media data. These applications impose more stringent requirements on the support mechanisms provided by underlying networks and operating systems than the more widely deployed continuous media applications of today, such as videoconferencing, streaming audio and video on the Internet, and (non-distributed) entertainment software. The quality of the media being presented is important—sometimes critically so—and thus resources must be properly allocated and scheduled in order to preserve this quality. The following three scenarios illustrate some of the problems that will need to be addressed by an application framework for the construction of mission-critical multimedia applications:
1. Digital TV studio. The production of a digital TV newscast is likely to include: incoming live news footage in a variety of formats; the use of archive material from several sites and in different formats; a news reader (anchor) interviewing remote subjects; frequent changes of programme source on-the-fly. The construction of a system to support such a demanding set of real-time activities while maintaining a continuously high quality of service seems well beyond the capacity of today's digital multimedia platforms.

2. Distributed surgery. A distributed conferencing system could support a medical team undertaking a transplant operation. The scarcity of specialists makes it necessary to support remote participation in surgical and other procedures. A transplant operation might involve two patients (donor and recipient) undergoing concurrent operations in separate rooms with other specialist consultants participating remotely. Additional channels would provide remote monitoring of patients, remote manipulation of surgical probes, etc. These would also require strong QoS guarantees and consistency constraints. The reliability and quality of service in such an application may be life-critical.

3. Remote surveillance. A video surveillance system for a major public event (e.g. a political party congress) incorporates a control room accessing the majority of available video and audio sources, but with other agencies supplying and receiving additional streams of information in a variety of formats via land lines and radio. Some of the sources and destinations of audio and video streams are mobile with variable bandwidth and connectivity. Some of the key requirements are to keep certain audio and video channels open to mobile users, to switch transmission links in response to communication failures, and to upgrade the quality of service in order to provide closer observation in response to suspicious incidents.

Applications such as these are often long-lived and subject to frequent reconfiguration and long-term evolution of application structure. The application
software that supports them must be highly adaptable and be capable of tolerating a wide variety of reconfigurations and extensions while still meeting their Quality of Service (QoS) guarantees.
This paper describes the DJINN multimedia programming framework [13], which is designed to support the construction and dynamic reconfiguration of distributed multimedia applications. The main requirements addressed by DJINN are to provide QoS and integrity guarantees for complex multimedia applications, both in their steady state and during reconfigurations. In particular, DJINN includes:
• Programming support for distributed multimedia applications. This includes the means to encapsulate potentially complex configurations of multimedia-processing components, and to abstract away from the details of hardware.

• Dynamic reconfiguration. The requirement is to support dynamic changes to complex component structures, such as when users join and leave groupware sessions. These changes to the application’s structure need to be performed atomically, and the application’s structural integrity must be maintained—for example, ensuring that the media formats handled by interconnected components are compatible with one another.

• Support for QoS negotiation, admission control and the specification of integrity constraints. This support is available to concurrent applications that can alter their QoS characteristics (e.g. audio quality) at run-time. The QoS support in DJINN provides an environment for adaptable multimedia applications to rapidly converge to a sustainable level of quality.
The rest of this paper is structured as follows. Section 2 is an overview of the DJINN architecture. Section 3 presents an illustrative example of a real application built in DJINN and demonstrates our approach to QoS management and dynamic reconfiguration. Section 4 briefly reviews some related research while Section 5 contains a summary and conclusions.

2 FRAMEWORK ARCHITECTURE
DJINN applications are constructed from networks of components consuming, producing and transforming media data streams and interconnected via their ports, in a similar fashion to other distributed multimedia programming frameworks such as [2], [8] and [9]. Our approach to meeting the requirements outlined above is based around the use of a dynamic runtime model of the application, which models the QoS, structural configuration and integrity properties of the application. The model is itself built from interconnected components, so that DJINN applications have a two-level structure as shown in Figure 1. The active components of an application are autonomous objects that produce, consume and transform multimedia data streams. Active components are distributed so as to meet the processing requirements of the application—in general, they must be co-located with the multimedia hardware that they control. On the other hand, model components do not directly process media data and can be located wherever is convenient for the application user or programmer. The model may be distributed, for example in a video-server system where the server and clients are under the control of different people or organisations.

Figure 1. Model and active components.
The model components of an application are arranged in a tree-structured hierarchy, where the leaves of the tree are atomic model components, each corresponding to a single active component (for example, the Video Source and Display components in Figure 1). Atomic model components export a common interface to their underlying active components, such that all “Camera” components will offer a common set of operations irrespective of the physical type of camera controlled by the active component. Additionally, atomic model components model the QoS characteristics of their underlying active components as sets of linear and quadratic relations between attributes—such as frame rate and size—of the media streams being processed. These relations include the resource requirements of the active component and any constraints it imposes on the media streams. The connectivity of the active layer is mirrored by the atomic model components: each has the same set of ports and inter-component connections as its active counterpart. The interior nodes of the model component tree are composite components. These components do not correspond to any one active component; rather, they encapsulate a sub-tree of the application model, with the composite component at the root. Composite components facilitate high-level application structuring and add additional behaviour to an application by providing operations to manipulate their encapsulated sub-components. For instance, a video-conferencing component would provide operations to add and remove conference participants. A composite component models the connectivity of its encapsulated sub-tree as a directed graph that can be expanded down to the atomic component level. The root composite component (the Video Player in Figure 1) also stores a cost-benefit function, which expresses the application’s specific resource/QoS trade-offs.
Application integrity is modelled by sets of predicates attached to model components. Predicates range from simple checks on atomic components—such as ensuring that output ports are only ever connected to input ports—to complex consistency tests on high-level composite components—a video-conferencing component should maintain full connectivity between all participants as well as enforcing a floor-control policy. The predicates are evaluated in leaf-to-root order, and all must be true for the application’s configuration to be considered valid. The bottom-up ordering allows a composite component further up the tree to declare the configuration invalid when it fails to meet a condition unknown to the sub-components.
Application programmers are unaware of the distinction between model and active components. All application-level programming in DJINN takes place at the model layer. Active components are created, configured and destroyed as required under the control of the application model. Components are controlled through a combination of remote invocations and inter-component events. Events can be transferred between components and additionally may flow along the same paths as media streams, interleaved with media data elements. Events enable heterogeneous components to respond to state changes; they also allow us to synchronise reconfigurations with media data flow.
Our primary motivation for the use of an application model is to clearly separate the design of an application from its realisation at run-time [13]. The model is largely independent of location, hardware platform, operating system and the various technologies used to process and transport media data; it enables programmers to build and evolve applications at a high level of abstraction. Active components, on the other hand, have no notion of their place in the larger application—they simply carry out their tasks of producing, processing, transporting and consuming multimedia data.
Figure 2. DJINN runtime architecture.
Figure 2 shows the relationships between the main components of the DJINN runtime architecture. The QoS and resource managers provide QoS management support, including admission control and resource allocation. The reconfiguration manager is responsible for controlling and validating changes to the application model; the reconfiguration scheduler maps approved changes onto the active component layer.
DJINN’s QoS guarantees depend upon appropriate real-time support from host operating systems and networks. We have a real-time testbed system comprising a set of hosts running the Chorus/ClassiX RTOS [10] and a dedicated Ethernet. Active components on the Chorus hosts are implemented in C++ while the model components—which do not require a real-time platform—are implemented in Java. CORBA is used for inter-component control communication; media streams use protocols appropriate to the stream type and the underlying network.

3 AN ILLUSTRATIVE EXAMPLE
In this section we analyse an application scenario similar to that described by Yeadon et al. in [22], who are developing systems to provide mobile multimedia support and applications for the emergency services. The setting is a large security-conscious site—such as a factory or research centre—equipped with fixed surveillance cameras feeding video to one or more central servers. Security personnel can monitor the live video streams via either fixed workstations or mobile terminals communicating over a WaveLAN wireless broadcast network [20]. Mobile users who move outside the coverage area of the WaveLAN are still able to receive video over a GSM cellular link [17], albeit with significantly reduced quality. In the event of a major incident—say a factory fire—where the emergency services are called, the surveillance video streams can be routed to the police/fire brigade control room over a high-speed wired link. Relevant streams will then be forwarded to emergency units en route to the scene, again using a GSM connection or dedicated packet-radio network. Once on the scene, emergency services personnel should be able to receive the higher-quality video available from the WaveLAN at the incident site. If audio streams are also available, they can be treated in the same way. A high-level view of this scenario is shown in Figure 3.
Figure 3. The example application.

Clearly this system is subject to frequent reconfiguration as video streams from different sources are switched between the different networks. One of the key requirements of the application domain is for high levels of availability and dependability of data [22]. This implies a need for seamless switching between network transports at the client end, and careful control of resource usage, especially in highly constrained environments such as the GSM network.
For the purposes of illustration we will consider just one aspect of this application with particular relevance to DJINN: a single mobile video unit that joins the system, then moves from the local WaveLAN to a dialup GSM link. This allows us to address two important aspects of DJINN: first, the admission control mechanisms that allow a new client to join the application with an appropriate guaranteed QoS level; and second, the algorithms used to schedule a smooth hand-over between the two networks with minimum disruption to the output seen by the user. The initial state of this system is shown in Figure 4.

Figure 4. Initial configuration.

3.1 Application Setup
Programmers build DJINN applications by creating and interconnecting model components. Before the active components are created and started, the model must pass through integrity tests—as described in Section 2—and an admission test. These tests aim to find an application configuration which does not break any of its constraints and for which enough system resources can be reserved. As an example of the former, the main video server in the surveillance application can support a fixed maximum number of GSM connections, determined by the number of attached modems. Any configuration of the model that exceeds this limit must be rejected.

Admission Test. Each admission test utilises the application’s QoS model, and is performed in three stages: to gather application-imposed constraints, to determine constraints on resources, and to generate a solution using a cost-benefit analysis. In the first stage components are asked to provide a list of their QoS characteristics (Table 1), expressed as simple numerical relations. This includes the amount of resources required by each component along with any constraints imposed by these components on the streams they process. Consider the remote surveillance example shown in Figure 4. The Video Source component imposes the constraint S1.rate ≤ 30 due to its frame-rate limitations. The constraint S4.rate ≥ 5 imposed by Display is user-specified and ensures that the displayed video will have a frame rate of at least 5 frames per second. The MPEG Encoder also imposes constraints on the frame sizes
it can produce. Note that to simplify this discussion Table 1 shows only the CPUrequirements of components; other resources are treated in a similar fashion.
The QoS characteristics of components are stored within individual model components. The component programmer specifies inter-stream constraints when she creates the component. Our approach to modelling the resource requirements has been to perform direct measurements of these values. We are currently developing a test-harness which provides the modeller with information related to the component’s resource utilisation characteristics. The user wishing to model the component inputs multimedia elements of known attributes (for example, video of known frame rate and size). The harness measures the resource usage. Currently, we measure CPU, memory and network utilisation. We provide a tool for the user to match the resultant data points to linear or piecewise-linear functions. Sometimes they are functions of products of attributes (for example, frame size times frame rate)—and so we obtain a quadratic function of attributes. Another complication is that resource utilisation may depend on media values. For example, an MPEG decoder may take differing amounts of time to decode two frames of the same type (I, P or B) and size. We can therefore derive several linear or quadratic relations, corresponding, in the case of MPEG, to video of differing classifications [18] (e.g. streams with low levels of motion, computer-generated animations, etc.).
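As an illustration of the fitting step, the sketch below fits measured CPU usage to a linear function of the product attribute frame rate × frame size, the form used in Table 1. This is our own hypothetical code, not the DJINN test-harness, and the sample points are made up, chosen to be roughly consistent with the MPEG Encoder coefficient in Table 1:

```java
// Hypothetical sketch: fit CPU usage (ms/sec) to c * rate * size by least
// squares through the origin, matching the shape of the Table 1 relations.
public class LinearFit {
    // samples[i][0] = frame rate, samples[i][1] = frame size (pixels),
    // samples[i][2] = measured CPU usage (ms/sec)
    static double fitCoefficient(double[][] samples) {
        double num = 0, den = 0;
        for (double[] s : samples) {
            double x = s[0] * s[1];   // the product attribute rate*size
            num += x * s[2];
            den += x * x;
        }
        return num / den;             // minimises the squared residuals
    }

    public static void main(String[] args) {
        // Made-up sample points, for illustration only.
        double[][] samples = {
            {10, 25344, 41},          // 176x144 at 10 fps
            {25, 25344, 102},         // 176x144 at 25 fps
            {10, 61952, 100},         // 352x176 at 10 fps
        };
        // Prints a coefficient close to the 1.61e-4 of Table 1.
        System.out.printf("CPU ~= %.3e * rate * size%n",
                          fitCoefficient(samples));
    }
}
```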
Table 1. QoS Characteristics.

Component           Constraints                                      Resource    Requirement (ms/sec)
Video Source        S1.rate ≤ 30                                     CPU at X    6.46×10⁻⁴ S1.rate·S1.size
MPEG Encoder        (S1.x = 128, S1.y = 96) or                       CPU at X    1.61×10⁻⁴ S1.rate·S1.size
                    (S1.x = 176, S1.y = 144) or
                    (S1.x = 352, S1.y = 176) or
                    (S1.x = 704, S1.y = 575) or
                    (S1.x = 1408, S1.y = 1152)
WaveLAN Connector   S2 = S3 (all attributes)                         CPU at X    8.07×10⁻⁵ S1.rate·S1.size
                                                                     CPU at Y    8.07×10⁻⁵ S1.rate·S1.size
MPEG Decoder        S3 = S4 (all attributes)                         CPU at Y    1.08×10⁻³ S1.rate·S1.size
Display             S4.rate ≥ 5; 120 ≤ S4.width ≤ 704;               CPU at Y    3.22×10⁻⁴ S1.rate·S1.size
                    80 ≤ S4.height ≤ …
In the second stage of the admission test, relevant resource managers are asked about the availability of their resources. The components’ resource requirement functions are turned into a set of inequalities (one for each resource) which express the bound on the resources that can be used by the application. This allows the current resource availability to be expressed within the model. This is shown in Table 2.
The third stage of the admission test attempts to solve the constraint relations. We currently use techniques borrowed from operations research for optimisation problems. These techniques utilise a benefit function (in our case the application-specific cost-benefit function) to find optimum values for a set of variables (the stream attributes) given a set of constraints (the stream and resource constraints). For our example we use the cost-benefit function f = w1·S4.rate + w2·S4.size − w3·(RcpuX + RcpuY). This is a weighted function (the weights are w1, w2 and w3) of the frame rate and size (which we want to maximise) and the total resource utilisation (which we want to minimise). We use w1 = w2 = 10⁶ and w3 = 1 to express the relative importance of good QoS over resource costs.
These numerical relations are then solved at run-time together with the application’s benefit function to determine an optimum QoS state. In this example the optimum has a frame rate of 10 fps and a frame size of 352x176. This reflects the limited CPU resource availability at host Y. At present we use a freely available linear solver, which limits our models to one stream attribute. We are currently evaluating other, more general-purpose solvers which do not have this restriction.
Table 2. Resource constraints.

Resource    CPU Availability (ms/sec)    Resource Constraint
CPU at X    800                          8.877×10⁻⁴ S4.rate·S1.size ≤ 800
CPU at Y    920                          1.482×10⁻³ S4.rate·S1.size ≤ 920
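To make the third stage concrete, the sketch below substitutes a brute-force search for the linear solver: it enumerates the MPEG Encoder's legal frame sizes from Table 1, takes the largest frame rate admitted by the resource constraints of Table 2, and picks the candidate that maximises the cost-benefit function. This is our own illustration, not DJINN's solver, and it assumes all streams share the same attributes (so S1.size = S4.size, as implied by the S2 = S3 and S3 = S4 constraints):

```java
// Hypothetical brute-force stand-in for the constraint solver: enumerate
// the MPEG Encoder's legal frame sizes (Table 1) and maximise the
// cost-benefit function subject to the CPU constraints (Table 2).
public class AdmissionSketch {
    public static void main(String[] args) {
        int[][] sizes = {{128,96},{176,144},{352,176},{704,575},{1408,1152}};
        double w1 = 1e6, w2 = 1e6, w3 = 1;           // weights from the text
        double bestF = Double.NEGATIVE_INFINITY;
        int bestRate = 0; int[] bestSize = null;

        for (int[] s : sizes) {
            double size = s[0] * s[1];
            // Largest rate allowed by each CPU constraint, capped at the
            // source's 30 fps limit.
            double maxRate = Math.min(30, Math.min(
                800 / (8.877e-4 * size), 920 / (1.482e-3 * size)));
            int rate = (int) maxRate;
            if (rate < 5) continue;                  // violates S4.rate >= 5
            double cpuX = 8.877e-4 * rate * size;    // aggregate CPU at X
            double cpuY = 1.482e-3 * rate * size;    // aggregate CPU at Y
            double f = w1 * rate + w2 * size - w3 * (cpuX + cpuY);
            if (f > bestF) { bestF = f; bestRate = rate; bestSize = s; }
        }
        System.out.printf("best: %dx%d at %d fps%n",
                          bestSize[0], bestSize[1], bestRate);
    }
}
```

Running this reproduces the solution quoted above, 352x176 at 10 fps, with CPU at host Y as the binding constraint (the 704x575 and larger sizes cannot sustain the 5 fps minimum).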
3.2 Dynamic Reconfiguration
We now consider the problem of reconfiguring the system in response to a user request or changes in the operating environment of the program. An example of the latter occurs when the mobile handset moves outside the range of the WaveLAN—if video playback is to continue, the application must be reconfigured to deliver the video data over the lower-bandwidth GSM network.
Application configuration—and reconfiguration—is expressed in terms of paths: model-layer end-to-end management constructs describing the media data flow between a pair of endpoints chosen by the application. A path encapsulates an arbitrary sequence of ports and intervening components that carry its data. It declares the end-to-end QoS properties of that sequence, including latency, jitter and error rate. It is up to each individual application to identify the end-to-end flows that are of interest to it and specify paths accordingly. Flows that are not part of a path do not receive any end-to-end guarantees, either for their normal operation or during reconfiguration.
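A path might be represented along the following lines (a hypothetical Java sketch; the field names are ours, not DJINN's):

```java
import java.util.List;

// Hypothetical sketch of the path construct: an end-to-end sequence of
// ports and intervening components, carrying the declared end-to-end QoS
// properties of the flow between the application's chosen endpoints.
class Path {
    final List<String> sequence;   // e.g. ["P1", "Compressor", "P2", ...]
    final double latencyMs;        // end-to-end latency bound
    final double jitterMs;         // end-to-end jitter bound
    final double errorRate;        // tolerable loss/error rate

    Path(List<String> sequence, double latencyMs,
         double jitterMs, double errorRate) {
        this.sequence = sequence;
        this.latencyMs = latencyMs;
        this.jitterMs = jitterMs;
        this.errorRate = errorRate;
    }
}
```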
A reconfiguration moves the application from one consistent state to another in an atomic manner. That is, if it is not possible to successfully perform all of the actions required to execute the reconfiguration, then none of the actions will be performed and the application will remain in its initial state. The reconfiguration is initially enacted on the application model; no changes are made to any active components until the new configuration has been approved by the admission control mechanism and validated against any application-defined integrity constraints. If it turns out that the requested changes cannot be successfully applied, the model components are ‘rolled back’ to their previous consistent state, leaving the application configuration unchanged.
The continuous media streams processed by the active components have constraints that must be maintained during the transition between the initial and final configurations. For example, it would not generally be acceptable for the arrival of a new mobile handset in the system to disrupt the video playback on other handsets. Therefore, we apply an ordering or schedule to the active component updates, to maintain the temporal consistency of streams across reconfiguration boundaries, a requirement we have informally named the ‘smoothness’ condition [14]: “The execution of a reconfiguration on a live system must not break any temporal constraint of any active path.”
The schedule ensures that the streams will be free of, or at least not unacceptably affected by, ‘glitches’. Glitches are lost data or loss of synchronisation, which appear to users as frozen frames, silences or unsynchronised sound and vision.
In our example, the WaveLAN infrastructure is able to detect a change in signal strength indicating that the user is moving outside the coverage area of the network [7],[15]. When this occurs, an event is delivered to the application model, causing it to initiate a hand-over to the GSM network. We assume that the WaveLAN can provide sufficient advance notice of an impending loss of service that we can have the GSM link fully up and running in time for a seamless hand-over. The reduced bandwidth of a GSM link (only 9600 bits/s) necessitates a reduction in frame rate and a switch to a more efficient—but lower quality—H.263 codec [5]. Figure 5 shows the final state of the path undergoing the reconfiguration (cf. the initial configuration in Figure 4).
The temporal constraints on this reconfiguration are:

1. That the interval between the arrival at P4 of the last frame from the initial configuration and the first frame from the final configuration is less than 200 ms.

2. That the play-out times of these two frames should not differ by more than 400 ms, i.e. no more than two frames lost or repeated.

Deriving the Schedule. Table 3 shows the latencies and startup times for the components in both configurations, where the latter is the time required to get a newly created active component into a state where it is ready to process media data. This is particularly relevant to this example, since the GSM network components have startup times three orders of magnitude greater than their operating latency.
Figure 5. Final configuration.
While the startup delay cannot be avoided, it is possible to reduce or eliminate its impact in the relatively common case that the application receives some advance warning of the need to reconfigure. To achieve this, we divide the active component updates into two phases:
1. Setup. This phase encompasses the creation of new active components and reservation of their resources. The initial configuration remains operational throughout. However, some of the new components may be started running if the smoothness requirements of the reconfiguration demand it.

2. Integrate. This phase is started by an event delivered after the end of the setup phase—in our remote surveillance example this event arises when the signal strength reaches a lower threshold. It completes the transition to the final configuration according to a schedule computed to maintain the temporal constraints of the reconfiguration, as sketched below.
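A minimal sketch of this two-phase protocol follows (hypothetical Java; the interface is our own invention, not the actual DJINN scheduler API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the two-phase update protocol; all names are
// illustrative, not the actual DJINN scheduler interface.
interface ActiveComponentHandle {
    void createAndReserve();      // instantiate; reserve its resources
    void prime(Runnable action);  // action to run when 'integrate' arrives
    void integrateEvent();        // deliver the integrate event
}

class TwoPhaseReconfiguration {
    private final List<ActiveComponentHandle> created = new ArrayList<>();

    // Phase 1 (setup): create new components and reserve resources while
    // the initial configuration keeps running; prime each component with
    // the actions it must perform at integration time.
    void setup(List<ActiveComponentHandle> newComponents, Runnable action) {
        for (ActiveComponentHandle c : newComponents) {
            c.createAndReserve();
            c.prime(action);
            created.add(c);
        }
    }

    // Phase 2 (integrate): triggered by an external event, e.g. WaveLAN
    // signal strength crossing its lower threshold. Integrate events are
    // delivered to the farthest upstream points of the reconfigured path
    // and propagate downstream with the media data.
    void integrate(List<ActiveComponentHandle> upstreamPoints) {
        for (ActiveComponentHandle c : upstreamPoints) c.integrateEvent();
    }
}
```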
Table 3. Component latencies.

Component        Latency (ms)    Startup time (ms)
Video Source     40              500
Display          20              100
MPEG Encoder     100             1000
MPEG Decoder     67              1000
H.263 Encoder    200             1000
H.263 Decoder    100             1000
WaveLAN Source   5               100
WaveLAN Sink     5               100
GSM Source       5               5000
GSM Sink         5               5000
Each active component is ‘primed’ during the setup phase with the actions toperform during integration. The actions are triggered by receipt of an event from anexternal source or on an input port; the event is also propagated downstream alongthe reconfiguration path. Integration is thus performed by scheduled delivery ofintegrate events to the farthest upstream points of the reconfiguration.
The scheduling algorithm works upstream along both versions of the path from P4, summing the latencies of each component encountered. When the configurations converge again at port P1, the difference in latencies along each path allows us to calculate when the last MPEG and first H.263 frames should be delivered to ports P2 and P2’ respectively. Thus, for the frames to arrive simultaneously at P4, we should inject the ‘start’ event into P2’ 133 ms before sending the ‘stop’ event to P2. We may stretch or compress this schedule by up to 200 ms and still meet the first constraint. Because the difference in the latency of the two configurations is less than 400 ms, the second constraint is also maintained.
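Using the latencies in Table 3, the 133 ms offset can be reproduced directly: the components shared by both configurations (Video Source, Display) add the same delay to each path and cancel out, so only the per-path encoder, network and decoder latencies matter. A small sketch of the arithmetic, in Java for consistency with the other examples:

```java
// Reproduce the 133 ms schedule offset from the Table 3 latencies.
public class ScheduleOffset {
    public static void main(String[] args) {
        // Initial path: MPEG Encoder -> WaveLAN Source/Sink -> MPEG Decoder.
        int mpegMs = 100 + 5 + 5 + 67;   // = 177 ms
        // Final path: H.263 Encoder -> GSM Source/Sink -> H.263 Decoder.
        int h263Ms = 200 + 5 + 5 + 100;  // = 310 ms
        // Inject 'start' into P2' this long before 'stop' into P2 so the
        // last MPEG frame and first H.263 frame reach P4 simultaneously.
        System.out.println((h263Ms - mpegMs) + " ms");  // prints "133 ms"
    }
}
```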
Dynamic Admissions. The above schedule assumes that sufficient resources are reserved, by a dynamic admission test that is part of the atomic action. Dynamic admission tests are slightly different from the initial admission test explained above. The major difference is that these tests must take into account the period during the transition from the initial configuration to the final configuration, when components from both configurations may be executing concurrently. We thus perform two admission tests, one for the final configuration and one for the transitional period. Dynamic admission tests use the initial state of the model when looking for a solution for the final configuration. The techniques used are similar to those found in sensitivity analysis [12] and can greatly increase the performance of these tests. Furthermore, components and resource managers that are not affected by the reconfiguration need not be consulted, since their information is already present in the model. This is particularly useful since in many cases the QoS characteristics of just a few localised components are affected. Table 4 shows the time taken to perform the admission control calculation with and without re-use of previous calculations.
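The transitional test can be pictured as a feasibility check over the summed requirements of both configurations; the sketch below compresses the real constraint re-solve into that simple check, and its numbers are placeholders rather than measurements:

```java
// Hypothetical sketch of the transitional admission check: during the
// hand-over, components from both configurations run concurrently, so
// their combined requirements must fit within resource availability.
public class TransitionalAdmission {
    static boolean admits(double[] oldReq, double[] newReq, double[] avail) {
        for (int i = 0; i < avail.length; i++) {
            if (oldReq[i] + newReq[i] > avail[i]) return false;  // over budget
        }
        return true;
    }

    public static void main(String[] args) {
        // Per-resource requirements, e.g. {CPU at X, CPU at Y} in ms/sec.
        // Placeholder figures for illustration only.
        double[] initialConfig = {550, 620};
        double[] finalConfig   = {200, 250};
        double[] availability  = {800, 920};
        System.out.println(admits(initialConfig, finalConfig, availability));
    }
}
```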
Table 4. Speedup from calculation reuse.

Number of relations    Complete recalculation (sec)    Re-using calculations (sec)
220                    0.20                            0.02
1860                   2.00                            0.18
5100                   11.00                           0.70
4 RELATED WORK
The component-based approach to application construction is used by a variety of multimedia programming frameworks, such as that of Gibbs & Tsichritzis [9], Medusa [21] and CINEMA [2]. CINEMA also makes use of composite components and a separate ‘model’ of the application that is used for control and reconfiguration. However, CINEMA’s idea of what constitutes a reconfiguration is quite limited and it has no equivalent of the ‘smoothness’ property for ensuring clean transitions between consistent states. It does allow inter-stream dependencies to be taken into account when performing admission control, but it requires application components to be created from the outset in order to provide information about constraints, rather than using a separate model. Also, the application components individually attempt to reserve resources during the admission test. This can lead to admission failing even in situations where sufficient resources might be found.
The need for smoothness support in the real-world domain of digital television—where there is a requirement to “splice” together MPEG streams within the resource constraints of hardware decoders whilst still meeting QoS guarantees—is illustrated by [4]. In [19], Sztipanovits, Karsai and Bapty present a similar two-level approach to component-based application composition in the context of a signal-processing system whose applications share many of the real-time requirements of multimedia.
The use of a QoS model can also be found in the Quorum project [6]. They model the structural and QoS characteristics of applications and use a benefit function to capture user preferences, although they do not consider smoothness properties.

5 SUMMARY AND CONCLUSIONS
This paper has motivated the benefits of a runtime model of the quality of service and structural integrity characteristics of multimedia applications. It has also demonstrated an algorithm for scheduling dynamic reconfigurations which maintains QoS guarantees. QoS characteristics are modelled as piecewise-linear or quadratic relations, which are solved using standard constraint programming techniques. The result is a negotiation between the application and the system, with user-configurable bounds. During reconfigurations, updates to active components are scheduled so as to maintain temporal constraints on the media streams. A generic software solver computes the schedule. We have illustrated our approach using preliminary experimental results from a real-world application domain.
A number of issues remain unresolved regarding the utility of our approach. It is not yet clear that resource requirements can always be modelled accurately as piecewise-linear or quadratic functions, or that the model is sufficiently generic to be transparently reused in different application domains. In the example presented in this paper we have made some simplifications (in addition to considering only CPU resources). In particular, the cost-benefit function should express trade-offs between the various streams and between the quality of the application versus its resource requirements. Furthermore, compressed streams would have attributes related to the compression parameters, allowing further trade-offs between stream quality and resource usage to be expressed.
Likewise, our reconfiguration scheduling algorithm is only fully developed for the single-path case—we are still exploring the issues that arise when reconfiguring multiple paths with inter-path dependencies. With reference to the requirements outlined in Section 1, this paper has addressed the reconfiguration and QoS aspects. Further details of DJINN can be found in [13], and our approaches to reconfiguration scheduling and application integrity management appear in [14],[16].

References
[1] M.P. Atkinson, L. Daynès, M.J. Jordan, T. Printezis & S. Spence. “An Orthogonally Persistent Java.” ACM SIGMOD Record 25(4), December 1996.
[2] Ingo Barth. “Configuring Distributed Multimedia Applications Using CINEMA.” Proc. IEEE Workshop on Multimedia Software Development (MMSD’96), Berlin, Germany, March 1996.
[3] Luc Bellissard & Michel Riveill. “Olan: A Language and Runtime Support for Distributed Application Configuration.” Journées du GDR du Programmation, Grenoble, France, November 1995.
[4] Bhavesh Bhatt, David Birks & David Hermreck. “Digital Television: Making it Work.” IEEE Spectrum 34(10), pp 19–28, October 1997.
[5] G. Bjontegaard. “Very Low Bitrate Videocoding using H.263 and Foreseen Extensions.” Proc. European Conference on Multimedia Applications, Services and Techniques (ECMAST ’96), Louvain-la-Neuve, Belgium, pp 825–838, May 1996.
[6] S. Chatterjee, J. Sydir, B. Sabata & T. Lawrence. “Modeling Applications for Adaptive QoS-based Resource Management.” Proc. 2nd IEEE High-Assurance System Engineering Workshop (HASE97), August 1997.
[7] Nigel Davies & Adrian Friday. “Applications of Video in Mobile Environments.” IEEE Communications, June 1998.
[8] Halldor Fosså & Morris Sloman. “Implementing Interactive Configuration Management for Distributed Systems.” Proc. 4th International Conference on Configurable Distributed Systems (CDS’96), Annapolis, Maryland, USA, pp 44–51, May 1996.
[9] Simon J. Gibbs & Dionysios C. Tsichritzis. Multimedia Programming: Objects, Frameworks and Environments. Addison-Wesley, Wokingham, England, 1995.
[10] M. Guillemont. “CHORUS/ClassiX r3 Technical Overview.” Chorus Systems Technical Report, May 1997.
[11] T. Härder & A. Reuter. “Principles of Transaction-Oriented Database Recovery.” ACM Computing Surveys 15(4), 1983.
[12] F.S. Hillier & G.J. Lieberman. Introduction to Operations Research. McGraw-Hill International Editions, New York, USA, 1995.
[13] Scott Mitchell, Hani Naguib, George Coulouris & Tim Kindberg. “A Framework for Configurable Distributed Multimedia Applications.” 3rd Cabernet Plenary Workshop, Rennes, France, April 1997.
[14] Scott Mitchell, Hani Naguib, George Coulouris & Tim Kindberg. “Dynamically Configuring Multimedia Components: A Model-based Approach.” Proc. 8th SIGOPS European Workshop, Sintra, Portugal, pp 40–47, September 1998.
[15] José M. F. Moura, Radu S. Jasinschi, Hirohisa Shiojiri & Jyh-Cherng Lin. “Video Over Wireless.” IEEE Personal Communications 3(1), pp 44–54, February 1996.
[16] Hani Naguib, Tim Kindberg, Scott Mitchell & George Coulouris. “Modelling QoS Characteristics of Multimedia Applications.” Proc. 13th IEEE Real-Time Systems Symposium (RTSS ’98), Madrid, Spain, December 1998.
[17] M. Rahnema. “Overview of the GSM System and Protocol Architecture.” IEEE Communications Magazine 31(4), pp 92–100, April 1993.
[18] K. Shen, L.A. Rowe & E.J. Delp. “A Parallel Implementation of an MPEG-1 Encoder: Faster than Real-Time.” Proc. SPIE Digital Video Compression: Algorithms and Techniques, San Jose, CA, USA, February 1995.
[19] Janos Sztipanovits, Gabor Karsai & Ted Bapty. “Self-Adaptive Software for Signal Processing: Evolving Systems in Changing Environments without Growing Pains.” Communications of the ACM 41(5), pp 66–73, May 1998.
[20] Bruce Tuch. “Development of WaveLAN, an ISM Band Wireless LAN.” AT&T Technical Journal 72(4), pp 27–37, July/August 1993.
[21] Stuart Wray, Tim Glauert & Andy Hopper. “The Medusa Applications Environment.” Technical Report 94.3, Olivetti Research Limited, Cambridge, England, 1994.
[22] Nicholas Yeadon, Nigel Davies, Adrian Friday & Gordon Blair. “Supporting Video in Heterogeneous Mobile Environments.” Proc. Symposium on Applied Computing, Atlanta, GA, USA, February 1998.
Biography
Prof. George Coulouris has been Professor of Computer Systems in the Department of Computer Science at QMW since 1978. He is co-investigator on the Mushroom and Djinn projects and the ongoing ESPRIT PerDiS project. Prof. Coulouris was an invited keynote lecturer at OZCHI ’96.

Dr. Tim Kindberg is a Senior Lecturer in the Department of Computer Science at QMW. He is principal investigator on the EPSRC-funded Mushroom project and co-investigator on the Djinn project. Dr. Kindberg is co-author of the book Distributed Systems: Concepts and Design along with Prof. Coulouris and Jean Dollimore.

Scott Mitchell is a Ph.D. candidate in the Department of Computer Science at QMW. He received the BCMS and MCMS degrees from the University of Waikato in 1994 and 1995 respectively. His research interests include reconfigurable distributed systems, adaptive middleware systems and multimedia.

Hani Naguib is a Ph.D. candidate in the Department of Computer Science at QMW. He received the BSc. from the American University of Cairo in 1994 and the MSc. from QMW in 1995. His research interests include distributed and real-time systems, operating system support for multimedia, and quality of service.