1. Introduction
Pervasive mobile applications are multiplying, and their complexity is increasing dramatically. As a consequence, their maintainability and adaptability are becoming challenging tasks. Moreover, such systems make increasing use of different and heterogeneous mobile devices, as well as sensors transmitting data (IoT). These devices are highly connected and can be used for different services, such as monitoring, analyzing and displaying information to users. Data management and adaptation in real time are therefore becoming challenging tasks. Context management is a key element for deriving semantically-rich context insights about mobile users (high-level adaptation tasks, preferences, intentions) from low-level measurements (location, type of activity, etc.), from their online multimodal interactions, or, more compellingly, from a combination of these. We argue here that users' mobility, users' situations (e.g., activity, location, time) and the limited resources of mobile devices (e.g., battery lifetime) must all be taken into account to ensure service continuity on mobile devices.
Today, the field of smart-* (home, city, health, tourism, etc.) is highly multimedia-oriented by nature; contents are heterogeneous; and the field lacks a smart way to manage the various modalities according to the current users' needs, usage situations and execution context. As a consequence, mobile applications certainly exist, but they are most often inadequate with respect to users' expectations and, more precisely, their instant expectations. Moreover, the massive use of new technologies has led to a dramatic multiplication of mobile applications, usages and information. Using many different devices (home and professional PCs, set-top boxes, smartphones, etc.) leaves the user quite confused: he or she may need to install a number of applications on various devices even for short-lived interactions. This provokes a huge multiplicity of applications (to install/uninstall/update), configurations and redundant context and user profiles. Therefore, it becomes mandatory to find a dynamic, intelligent way to manage multiple devices at the same time. These devices need to communicate regardless of hardware/software differences. Our goal is to provide suitable services and interactive applications in a way that is transparent for the user, according to his or her needs (personal, health, social and professional) and current context.
In order to design smart context-aware mobile applications, we need to exploit semantic web technologies, as well as the Kalimucho middleware [
1], where the mobile application qualities are managed efficiently according to user needs and available context sources. Our proposed platform is implemented using an ontology-based approach. This ontology captures a conceptual schema shared across specific application domains, such as tourism, healthcare, transport and sport, and maintains semantic quality information about heterogeneous service providers for the service model. Our application runs on a flexible, extendable semantic model that can evolve at any moment without any user intervention. We extend our previous knowledge base, set at a high semantic level on a cloud architecture [
2], to include fully-distributed situation management in heterogeneous environments. To capture and characterize situations, we scan the environment (sensors and smart devices) and reason over the context changes that require multimodal and distributed behavioral adaptation. Our goal is to manage transparently all of the functionalities and additional modules that users may require, an ideal that no platform offers today.
We particularly focus on context-aware e-health mobile applications. To keep capturing the context efficiently, the platform must monitor and handle continuous context changes. The distributed/centralized context monitor and event manager collect any information that could be relevant to the context, regardless of its source, and store it in a database. This information is then represented and reasoned over with our ontology and the centralized context reasoner in order to deduce the current situations, which are reported to the service controller. The latter is responsible for selecting the appropriate quality service for the user according to the inferred situations, following our strategy [
3]. The execution is based on Kalimucho [
1]. Such middleware allows a dynamic (re-)deployment strategy of services. Kalimucho is a platform that allows dynamic reconfiguration of applications on desktops, laptops and mobile devices. However, this platform does not currently address service selection or service prediction, and therefore, it cannot provide the appropriate service to the user.
Our main objectives are to extend the Kalimucho platform with a new layer called the Autonomic Semantic Service Adaptation Controller and Reconfiguration (ASSACR) in order to: (1) dynamically monitor usage resources and user-constraint changes across heterogeneous network protocols and mobile platforms (laptop, smartphone, etc.); (2) provide centralized/distributed semantic multimodal event detection in order to manage relevant context information regardless of its source; (3) provide a distributed action mechanism that gives the application flexibility and dynamicity; (4) provide centralized semantic adaptation decision making to achieve efficient adaptation decisions; (5) find and select relevant semantic services for heterogeneous mobile devices and cloud services, enabling full multimodality across connected mobile devices shared at any time; (6) maximize redundancy relays and the switching of mobile services; (7) autonomically optimize the response time of situation matching under the criteria's priorities (location, time, category). Our platform uses semantic technologies and multi-device context data representation to facilitate seamless and interactive media services in a common contextual mobile environment.
Section 2 deals with related works in context-aware software platforms and possible types of adaptations.
Section 3 focuses on our contribution, i.e., the ASSACR framework. In this section, we will detail our smart semantic-based context-aware service selection strategy.
Section 4 describes our adaptation platform architecture, i.e., the Kali-Smart platform.
Section 5 validates our proposal, and
Section 6 concludes the paper with some future works.
2. Related Works
The first related area of research covers platforms that adapt component-based applications to the evolving needs of users and to the execution context by exploiting event-condition-action rules (e.g., WComp [
4], MUSIC [
5], OSGi [
6], Kali2Much [
7], Kalimucho [
1]).
MUSIC [
5] is the best-known autonomous platform supporting self-adaptive, context-aware mobile applications. This platform adapts to dynamic changes in the environment (e.g., location, network connectivity) in order to satisfy user requirements and device properties (battery, memory, CPU). The adaptation process defined in MUSIC is based on the principles of planning-based adaptation. However, this work does not take into account the multimodal aspects of user-machine interaction, the contextual information that can be gathered by bio-sensors, or a distributed action mechanism. WComp [4] proposed a powerful framework for adapting multimodal mobile services in a pervasive computing environment by constructing a private mobile service adaptation agent for each mobile user in the cloud. The main drawback of this platform is that it lacks a distributed adaptation action mechanism and smart multimodal event detection mechanisms (USB (Universal Serial Bus) inputs/outputs, social inputs, etc.).
Recently, Da et al. [
7] have proposed Kali2Much, a context management middleware providing services dedicated to the management of distributed context at the semantic level in a shared domain. This work offers excellent smart service management and a predefined policy-deployment strategy, but it does not support user-defined policies and does not consider the prediction of user context changes.
Another interesting work is SenSocial [
8], which defines middleware for integrating online social networks and mobile sensing data streams. This middleware relies on social networks for capturing and filtering context data. It proposes a generic solution to manage and aggregate context streams from multiple remote devices and provides richer contextual information from online social networks, but it does not take into account the semantics of services, categories, QoS or context constraints.
More recently, Taing et al.’s [
9] work builds on the Context Toolkit infrastructure; it supports changing XML files and firing events to an unanticipated adaptation component that can be associated with fully-described situations, including time, place and other pieces of context. This work uses a transaction mechanism to ensure uniformly-consistent behavior for every smart object executing inside a transaction, but it supports only notification as an action type, without multimodality aspects that could be triggered as a result of situation identification and smart event detection.
The second related areas of research are multimodal adaptation projects [
10,
11,
12,
13]. However, these works lack a dynamic, intelligent way to respond to user needs, e.g., a detection function that detects a modality (touch, gesture, voice) in order to activate it on another device.
Roberto Yus et al. [
10] proposed SHERLOCK, a system that processes user requests continuously to provide up-to-date answers in heterogeneous and dynamic contexts, using the locations of mobile users to offer customized information. Ontology techniques are used to share knowledge among devices, which enables the system to guide the user toward the service that best fits his or her needs in the given context. This work is efficient, and knowledge is shared between different services using OWL as the description format. However, SHERLOCK does not support multimodal standards for representing the application-specific semantics of user input.
Etchevery et al. [
11] focused on Visual Programming Languages (VPLs), which allow designers with a minimal computer-science background to specify the interactions between the system and users. The power of VPLs is the possibility of interpreting the meaning of diagrams in order to automatically generate executable code; each interaction is described by a diagram specifying the user action initiating the interaction, as well as the system reactions. VPLs could be profitable for touristic or educational purposes. However, this work lacks semantic expressiveness and efficient context management.
Primal Pappachan et al. [
12] proposed Rafiki (a Semantic and Collaborative approach to Community Health-Care in Underserved Areas), a system for mobile computing devices that guides community health workers through the diagnostic process and facilitates collaboration between patients and healthcare providers. Interactions in community healthcare can follow an Internet-based or a peer-to-peer (P2P) approach. Semantic context rules specify the desired reactive behavior and can be manually defined by designers or generated by applications.
To improve the interactions between machines and users, Joyce Chai et al. [13] have developed RIA (Responsive Information Architect), a semantics-based multimodal interpretation framework for conversational systems. The idea is that whatever the user input (text, gesture, voice), it should be interpreted according to the user's situation and desires. The work proposes an event-condition-action rule specification that satisfies the functional requirements of covering the five semantic dimensions, handling various data collected from sensor devices and smartphone middleware, and supporting composite contexts.
All of the above-described related works provide mechanisms for dealing with the inherent heterogeneity and complexity of ubiquitous environments. However, compared to our proposal (see Table 1), they do not: (1) provide a distributed action mechanism with smart multimodality aspects and a service prediction strategy that gives the application the flexibility and dynamicity needed to run through the user's environment, which, to our knowledge, has not been proposed yet in this field; (2) support migration of context middleware components (i.e., event detection, situation reasoner, action mechanism) in a transparent and uniform way; or (3) provide a distributed/centralized context monitor and semantic event detection to manage relevant information that could be important for the context regardless of its source. In addition, we implement a centralized semantic context reasoner, making decision making a centralized process handled by the main host (e.g., cloud, server). This choice is meant to prevent redundant adaptation decisions.
The combination of the Kalimucho middleware [
1], mobile computing and IoT, with ontologies and rule-based approaches, provides a new design approach for context-aware healthcare systems. In addition, the context model that we propose not only allows representing and reasoning about contextual information, but also provides the developer with a generic and flexible model that facilitates modeling the context and developing context-aware systems.
We extended our previous works [
3] by allowing a user to define his or her preferences to generate a primary context configuration as desired, with a distributed action mechanism with multimodality aspects and a service prediction strategy. We propose autonomic and dynamic service management that monitors, analyzes, plans and optimizes the service latency and maximizes service reliability. ASSACR (Autonomic Semantic Service Adaptation Controller and Reconfiguration) provides automatic discovery of equivalent multimodal services and duplicated adaptation paths by analyzing the execution context, and it selects relevant semantic services across heterogeneous mobile devices and cloud services so that connected mobile devices and objects can be shared at any time.
4. Ontology and Rules-Based Context Model
4.1. Context Modeling
The main objective of our approach is to improve the efficiency and accuracy of users' adaptation tasks. This objective is achieved through finite sets of semantically-relevant adaptation services and various user contexts. A user has a context in which he or she wishes to adapt his or her multimedia documents within a specific activity, at a known time and location, using one of the offered modalities; any smart service can be used locally or via the cloud, which lets the user handle the data storage needed to run his or her applications.
In order to facilitate the conception and the development of our ontology, we divide it into four hierarchical levels: (1) Contextual Information level; (2) Contextual Situations level; (3) Contextual Services level; (4) Contextual Constraint level. These levels contain seven main classes, which are: Context class, Event class, Situation class, Service class, Context Constraint Class, ContextProperty class, ContextPropertyLogicValue class. These classes represent generic concepts, which can be used in any pervasive context-aware distributed mobile application that aims to provide appropriate services to the user according to the current situations.
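As a sketch, these four levels and seven core classes can be rendered as plain Python data classes. The class names follow the ontology; the attributes shown are illustrative assumptions, not the ontology's full property set:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContextPropertyLogicValue:   # qualitative value with its quantitative range
    label: str                     # e.g., "very high"
    min_value: float               # Context Min Value
    max_value: float               # Context Max Value

@dataclass
class ContextProperty:             # a measurable context parameter
    name: str                      # e.g., "glucose_level"
    values: List[ContextPropertyLogicValue] = field(default_factory=list)

@dataclass
class Context:                     # Contextual Information level
    category: str                  # e.g., "UserContext", "QoSContext"
    properties: List[ContextProperty] = field(default_factory=list)

@dataclass
class Event:                       # a detected context change
    source: str                    # sensor or device that produced it
    property_name: str
    value: float

@dataclass
class ContextConstraint:           # Contextual Constraint level
    expression: str                # e.g., "glucose_level = 'very high'"

@dataclass
class Situation:                   # Contextual Situations level
    name: str
    constraints: List[ContextConstraint] = field(default_factory=list)

@dataclass
class Service:                     # Contextual Services level
    name: str
    triggered_by: List[Situation] = field(default_factory=list)
```

Instances of these classes can then be wired together, e.g., a `Service` triggered by a `Situation` whose `ContextConstraint` refers to a `ContextProperty` of some `Context`.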
4.1.1. Context Sub-Ontology
We divided the context sub-ontology into seven sub-contexts (see
Figure 3).
The RessourceContext describes the current state of the hardware and software resources (memory size, CPU speed, battery energy, etc.).
The
User Context describes information about the user that can alter the adaptation service. User preferences include age, preferred languages and preferred modalities (voice, gesture, pen click, mouse click, etc.). A user can select which multimedia object should be adapted (image, text, video or audio); for example, if the user receives audio while at work, he or she may prefer to receive text instead, which requires an adaptation service to convert the audio to text. We also find a description of the user's health situation, since the user can be healthy or handicapped (
Figure 4).
The
Smart-Object Context describes data that are gathered from different sensors and describe orchestrated acts of a variety of actuators in smart environments (
Figure 5). We have three types of sensor data: (1) bio-sensor data, captured by bio-sensors (blood pressure, blood sugar, body temperature); (2) environmental sensor data, captured by environmental sensors (home temperature, humidity, etc.); (3) device sensor data, captured by device sensors (CPU speed, battery energy, etc.).
The
QoS Context describes the quality of any mobile-based application, represented in our ontology as a set of metadata parameters. These QoS parameters are: (1) continuity of service; (2) durability of service; (3) speed and efficiency of service; and (4) safety and security (see
Figure 6).
The
Host Context represents the different hosts of services proposed by providers. For example, services can be hosted on local devices or in the cloud. The local device class contains information about fixed and mobile devices; mobile devices have limited resources, such as battery, memory and CPU. The Cloud class contains information about the cloud server (e.g., Google Cloud) that can host services. A service is deployed and migrated on its host; since the service has constraints, a substitution of the service location can occur (a possible scenario: when the battery level is low, the service is migrated to the cloud so that data are stored remotely, which helps minimize energy use). As limited mobile resources can break mobile services, we look to the cloud or to resources in proximity as a way to ensure service continuity on mobile devices (see
Figure 7).
The
Environment Context describes spatial and temporal information (
Figure 8):
- -
Temporal information can be a date or time used as a timestamp. Time is one aspect of durability, so it is important to date information as soon as it is produced.
- -
The Place describes information related to the user's location {longitude, latitude and altitude}; in a given location, we can find the available mobile resources. Mobile resources are mobile devices, such as tablets, smartphones and laptops, and smart objects, such as bio-sensors, environment sensors, actuators, etc. Resources are accessible by users.
- -
The ActivityContext describes the scheduled activity in which a user can engage, according to his or her schedule.
The
Document Context describes the nature of the documents (text, video, audio). The document context specifies a set of properties related to each media type: (1) text: alignment, font, color, format, etc.; (2) image: height, width, resolution, size, format, etc.; (3) video: title, color, resolution, size, codec, etc.; (4) sound: frequency, size, resolution and format (
Figure 9).
4.1.2. Context Constraint Ontology
A context constraint is defined through the notions of context parameter and context expression; the latter is further categorized into simple and/or composite expressions, thus forming a multi-level context ontology, as shown in
Figure 10.
Context_property: Each context category has specific context properties. For example, the device-related context is a collection of parameters (memory size, CPU power, bandwidth, battery lifecycle, etc.). Some context parameters may use semantically-related terms, e.g., CPU power, CPU speed.
Context_expression: denotes an expression that consists of the context parameter, logic operator and logic value. For instance: glucose level = ‘very low’.
Context_constraint: consists of a simple or composite context expression. For example, a context constraint can be: IF glucose level = ‘very high’ AND Time = ‘Before Dinner’ AND Location = ‘Any’ THEN Situation = Diabet_Type_1_Situation.
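The Context_expression/Context_constraint pattern above can be sketched in code. The operator set, the handling of the `Any` wildcard and the rule encoding are our assumptions; the paper expresses such rules inside its ontology:

```python
# Logic operators for simple context expressions (assumed set).
OPS = {
    "=": lambda a, b: a == b,
    "<": lambda a, b: a < b,
    ">": lambda a, b: a > b,
}

def eval_expression(context, expression):
    """A simple expression: (context parameter, logic operator, logic value)."""
    param, op, value = expression
    if value == "Any":                 # 'Any' matches every value (assumption)
        return True
    return OPS[op](context.get(param), value)

def eval_constraint(context, rule):
    """A composite expression: conjunction of simple expressions (assumption).
    Returns the inferred situation name, or None when the constraint fails."""
    if all(eval_expression(context, e) for e in rule["if"]):
        return rule["then"]
    return None

rule = {
    "if": [("glucose_level", "=", "very high"),
           ("time", "=", "Before Dinner"),
           ("location", "=", "Any")],
    "then": "Diabet_Type_1_Situation",
}
context = {"glucose_level": "very high", "time": "Before Dinner", "location": "Home"}
situation = eval_constraint(context, rule)
```

Here `location = 'Any'` is satisfied by any location, so only the glucose and time expressions actually constrain the match.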
4.1.3. Service Ontology
Nowadays, environments are getting smarter in order to reply to user requests anytime and anywhere according to the user's location, and users interact in order to obtain better services from providers. A service can be either smart or interactive. Any smart service can be used locally or via the cloud, which lets users handle the data storage they need to run their applications. Interactive services cover unimodal and multimodal interactions. A service can be executed in various forms with various levels of quality of experience and quality of service. Each user expects his or her own QoS when using a service. To ensure the QoS level, the service has specific mobile device constraints (memory size, CPU speed and battery lifetime). We thus distinguish three service types: Smart Service, Interactive Service and Adaptation Service (
Figure 11).
4.1.4. Context Property Sub-Ontology
Some context parameters can use semantically-related terms, such as processor speed. Each parameter is described by the
ContextPropertyLogicValue class to which is assigned a range of qualitative values (
Context Min Value,
Context Max Value); these values make it possible subsequently to determine the quantitative values (see
Figure 12).
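A minimal sketch of how ContextPropertyLogicValue ranges turn a raw reading into a qualitative value follows. The glucose thresholds below are hypothetical stand-ins; in the ontology, the (Context Min Value, Context Max Value) pairs are defined per parameter:

```python
# Each qualitative label carries a (Context Min Value, Context Max Value) range.
# Thresholds here are illustrative only (units: mg/dL).
GLUCOSE_MG_DL = [
    ("very low",   0,  54),
    ("low",       54,  70),
    ("normal",    70, 140),
    ("high",     140, 200),
    ("very high", 200, 600),
]

def qualify(reading, levels):
    """Map a quantitative reading to the label whose [min, max) range contains it."""
    for label, lo, hi in levels:
        if lo <= reading < hi:
            return label
    return None  # reading falls outside every declared range
```

The inverse direction, from a qualitative label back to its numeric bounds, is simply a lookup of the matching (min, max) pair.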
4.1.5. Situation Ontology
This part represents the possible situations that we can define for each context. For instance, the home temperature context can have situations such as normal, cold, hot and very hot. We call these situations Contextual Situations, since they depend on contextual information. This class contains two subclasses:
External-Situation class and
Internal-Situation class. The first one represents situations that are related to the user’s environment and user’s devices, such as home temperature situations and battery situations. The
Internal-Situation class represents situations that are related to a specific domain, such as the person’s health state, like blood sugar situations and blood pressure situations. Each situation has data type properties (see
Figure 13), such as situation type, a max value and a min value, that are defined by the developer and the domain expert (in our work, the physician). Examples of situation rules are given in Table 2 (Section 4.4).
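A situation with its data-type properties (situation type, min value, max value) can be sketched as follows. The names and thresholds are hypothetical stand-ins for values that, in our work, a physician would define:

```python
# External situations relate to the environment/devices; internal ones to the
# application domain (here, health). All numeric bounds are illustrative.
SITUATIONS = [
    {"name": "Hot_Home_Situation",     "type": "external", "property": "home_temperature", "min": 28, "max": 45},
    {"name": "Low_Battery_Situation",  "type": "external", "property": "battery_level",    "min": 0,  "max": 15},
    {"name": "Hypoglycemia_Situation", "type": "internal", "property": "glucose_level",    "min": 0,  "max": 70},
]

def identify_situations(readings, situations=SITUATIONS):
    """Return the names of all situations whose [min, max) range matches a reading."""
    return [s["name"] for s in situations
            if s["property"] in readings
            and s["min"] <= readings[s["property"]] < s["max"]]
```

For example, a reading set covering home temperature and glucose can trigger an external and an internal situation at the same time.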
4.2. Dynamic Situation Searching
There are several possible ways to identify a situation [
16,
17,
18,
19], and a proper generic identification technique still has to be defined. Computing similarities is also quite difficult when integrating heterogeneous user profiles. The services composing a situation are generally connected semantically. Our similarity measure extends the properties of Sim defined in [19]. It is formalized in terms of two sets: "a", the set of concepts common to Qi (the current situation of the user) and Sj (a profile constraint) within the same domain, and "b", the set of concepts of Qi that do not exist in Sj; sima denotes the atomic similarity between each pair of context concepts of the situation Q and the constraint S, defined as a function mapping two concepts to the interval [0, 1]. First of all, local events have to be detected and local situations identified. If no local events are detected, the platform checks whether events occur on nearby mobile devices by learning their exact situations and re-deploying distributed interactive services. The proposed algorithm (Algorithm 1) matches the current situation against each constraint defined in the user's profile. It takes the list of the current situation's concepts as input and calculates the atomic similarity sima for each pair of concepts (one from the situation, one from the constraint). If the match is not a Fail, it computes the overall score Sim for each constraint and appends the advertisement to the result set. Finally, the result set is returned ranked, and the highest matching measure is selected.
Algorithm 1: Situation Matching and Dynamic Service Selection
Input: Profile[], ProfileStatus[], two contexts C1 and C2 (C1 = S, C2 = Q) // set of profiles and profile status
Output: Overall score Sim(Q, S) // best semantically-equivalent services
Matched_Relation // EXACT, SUBSUMES, NEAREST-NEIGHBOUR, FAIL
MatchedService_List // a set of services that meet the user's context preferences
1. Match the preferences with local services and obtain each constraint's similarity value as defined in [1].
2. Match the preferences with nearby services and obtain an overall similarity for each constraint as in [1].
3. Select quality-equivalent semantic services as defined in [11].
4. Generate the reconfiguration file with the k best services.
5. Save the new reconfiguration file on the Kalimucho server.
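Algorithm 1 can be sketched as follows under an assumed reading of Sim: the atomic similarities over the common concepts "a" are summed and normalized by |a| + |b| (the paper's display formula is not reproduced here, so this normalization is our assumption). The atomic measure sima is stubbed as exact match, whereas the paper's measure extends [19]:

```python
def sima(cq, cs):
    """Atomic similarity of two concepts in [0, 1] (exact-match stub)."""
    return 1.0 if cq == cs else 0.0

def sim(q_concepts, s_concepts):
    """Overall score: summed atomic similarities normalized by |a| + |b| (assumed)."""
    a = [c for c in q_concepts if c in s_concepts]      # concepts common to Q and S
    b = [c for c in q_concepts if c not in s_concepts]  # concepts only in Q
    if not a:
        return 0.0                                      # FAIL: nothing in common
    return sum(sima(c, c) for c in a) / (len(a) + len(b))

def match_situation(current_concepts, profile):
    """Rank each (constraint, service) pair in the profile by Sim, highest first."""
    results = []
    for constraint_concepts, service in profile:
        score = sim(current_concepts, constraint_concepts)
        if score > 0.0:                                 # keep non-Fail matches only
            results.append((score, service))
    return sorted(results, key=lambda r: r[0], reverse=True)
```

For instance, a current situation sharing two of three concepts with a profile constraint scores 2/3, while a constraint with no common concepts is a Fail and is dropped from the ranked result set.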
4.3. Context Provisioning
In order to be successful, a context-aware multimodal mobile application must have full visibility of the person's context, including what, when and how to monitor, collect and analyze context data. When the user context changes (e.g., less available memory, less processor power, a change of user location), the environment context changes (e.g., less bandwidth) or the user preferences change, the ASSACR component is provisioned with a set of context metadata in order to dynamically:
Provision the next substitute service from the list of services offering the same functionality but requiring fewer resources, sorting the available services by QoS.
Add new services to the list of found services; a new service is matched against the newly-provisioned changes in the profile constraints (e.g., the user context).
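The first of these provisioning steps can be sketched as follows; the service and resource fields (functionality, required memory, a scalar QoS score) are our assumptions:

```python
def next_substitute(current, services, resources):
    """Pick the substitute offering the same functionality as `current` that fits
    the available resources, preferring the highest QoS (fields are assumed)."""
    candidates = [s for s in services
                  if s["functionality"] == current["functionality"]
                  and s["required_memory"] <= resources["memory"]]
    candidates.sort(key=lambda s: s["qos"], reverse=True)  # sort by QoS, best first
    return candidates[0] if candidates else None

# Hypothetical catalog: three services offering the same "play" functionality.
SERVICES = [
    {"name": "VideoHD",   "functionality": "play", "required_memory": 512, "qos": 0.9},
    {"name": "VideoSD",   "functionality": "play", "required_memory": 128, "qos": 0.6},
    {"name": "AudioOnly", "functionality": "play", "required_memory": 32,  "qos": 0.3},
]
```

When memory drops below what the current service needs, the highest-QoS service that still fits is selected as the substitute.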
We consider each sub-context profile and its constraints. Solving a single user constraint in a small service space (local, neighbors) is easier and less time consuming. Therefore, we start by discovering services of optimal cost for each single context-attribute change, and we build a good provisioning service chain through a combination of reconfiguration operations (create/update/migrate/remove), thereby approaching the globally-optimal service reconfiguration chain.
Our approach divides the set of user context changes into several phases (Algorithm 2). One more preference (expressed as an Event-Condition-Action rule) is considered in each phase until the global reconfiguration chain is provisioned and generated. First, a set of services is discovered and matched against the context conditions concerned by a single changing context attribute; we call this step Single-Increment-Context-Evolution. Secondly, the better-quality services obtained from Step 1 and the services derived from provisioned context-attribute changes (low bandwidth, battery lifetime, etc.) are joined together to form an initial configuration subject to multiple context constraints.
For efficient provisioning and good management of the self-management adaptation process, we use a Poisson event-based simulation model to predict context changes (i.e., mobility of users, usage of resources); from it, we derive the global system behavior that requires adaptation, thereby mastering the rising complexity of the context-aware system.
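For a homogeneous Poisson model of context-change arrivals, the probability of at least one change in the next window of length Δt is 1 − e^(−λΔt). A sketch follows; the provisioning threshold of 0.5 is our assumption, not a value from the paper:

```python
import math

def prob_context_change(rate_per_min, window_min):
    """P(at least one Poisson arrival in the next window) = 1 - exp(-lambda * dt)."""
    return 1.0 - math.exp(-rate_per_min * window_min)

def should_provision(rate_per_min, window_min, threshold=0.5):
    """Provision a reconfiguration in advance when a change is likely enough."""
    return prob_context_change(rate_per_min, window_min) >= threshold
```

For example, at a rate of 0.2 changes per minute over a 10-minute window, the change probability is 1 − e^(−2) ≈ 0.86, so provisioning would be triggered; at 0.01 changes per minute, it would not.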
Our process predicts the minimum adaptation cost within a low adaptation time, resulting in the best configuration, i.e., one that maximizes system quality subject to device resource constraints and user preference constraints. First, the platform restricts the scope of the search to the range of configurations that differ from the current configuration only by the service at the origin of the reconfiguration event. When this approach does not yield a solution, we look for a new configuration, starting by switching to a new relay or by moving a service to a suitable device at run-time. The ASSACR generates a provisioned configuration model; when a reconfiguration is triggered, we start up the provisioned saved reconfiguration.
Algorithm 2: Incremental Context Changes Prediction
Input: Predicted user context changes
Output: Provisioned services reconfiguration file
1. Set p = 1, where p is the phase number; find a provisioned quality-services list with regard to the first concerned context-attribute change.
2. Set p = p + 1; the next phase starts. Generate the fittest quality-services list with regard to the p-th concerned context-attribute change and join it with the (p − 1)-th concerned context attributes.
3. If context-attribute changes remain, go to Step 2.
4. Generate the provisioned reconfiguration file.
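Algorithm 2's phase-by-phase joining can be sketched as follows. The `discover` callback, the catalog contents and the keep-the-best join policy are our assumptions:

```python
def provision_chain(context_changes, discover):
    """Build the reconfiguration chain one phase (one context-attribute change)
    at a time, joining each phase's fittest service with the chain so far."""
    config = []                           # services provisioned in earlier phases
    for change in context_changes:        # one phase per predicted change
        candidates = discover(change)     # fittest services for this change
        if candidates:                    # join with the previous phases' result
            config.append(candidates[0])  # keep the best candidate (assumption)
    return config                         # the provisioned reconfiguration file

# Hypothetical catalog: best-first service lists per context-attribute change.
CATALOG = {
    "battery=low":   ["CloudOffloadService", "LowPowerDisplayService"],
    "location=work": ["TextModalityService"],
}
chain = provision_chain(["battery=low", "location=work"], CATALOG.get)
```

Each phase only has to solve one context-attribute change, which keeps the per-phase search space small, as argued above.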
4.4. Context Reasoning
Context reasoning should take into consideration each user preference, so as to give the user the best service according to the current location, environment constraints, schedule, etc. We have classified our rules into three categories:
Situation Rules,
Smart Service Rules and
Constraints Rules.
Situation Rules are used to infer situations from contextual information. Some situations trigger appropriate services that must be provided to the user; others, like automatic gesture-modality detection, can be used to help ensure the continuity of the service.
Table 2 illustrates examples of the rules.
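The three rule categories can be chained in a small reasoning loop: situation rules infer situations from facts, smart service rules map situations to services, and constraint rules filter those services. The rule encodings and names below are illustrative assumptions:

```python
def reason(facts, situation_rules, service_rules, constraint_rules):
    """Apply the three rule categories in sequence (encodings are assumed)."""
    # Situation Rules: infer situations from contextual information.
    situations = {r["then"] for r in situation_rules if r["when"](facts)}
    # Smart Service Rules: situations trigger appropriate services.
    services = {r["then"] for r in service_rules if r["when"] in situations}
    # Constraint Rules: keep only services whose constraints hold.
    allowed = {s for s in services
               if all(c["check"](facts) for c in constraint_rules if c["service"] == s)}
    return situations, allowed

situation_rules = [{"when": lambda f: f["glucose"] >= 200, "then": "Hyperglycemia_Situation"}]
service_rules = [{"when": "Hyperglycemia_Situation", "then": "AlertPhysicianService"}]
constraint_rules = [{"service": "AlertPhysicianService",
                     "check": lambda f: f["battery"] > 5}]  # enough battery to send the alert

situations, services = reason({"glucose": 230, "battery": 40},
                              situation_rules, service_rules, constraint_rules)
```

With sufficient battery, the high glucose reading yields the alert service; with the constraint violated, the situation is still inferred but no service is selected.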
6. Conclusions
An ideal future in which our applications know what we need before we even have to ask is coming. It is becoming possible to provide smarter, pervasive multimodal services in a smart environment. These services allow users to live safely and inexpensively in their own smart environments. The complexity of pervasive systems in general, and of ubiquitous-health systems in particular, is steadily increasing. In these systems, there is a growing variety of mobile computing devices, which are highly connected and can be used for different tasks, in particular in dynamically-changing environments. Such systems must be able to detect context information over time, coming from different and heterogeneous entities, deduce new situations from this information and then adapt their behavior automatically according to the deduced situations.
In this paper, we have proposed our context platform, Kali-Smart. The main objective of this platform is the collection of contextual data captured directly by sensors. It supports distributed action mechanisms with an incremental context-change prediction strategy, as well as the automatic adaptation of data contents to users. It also provides its clients with three types of programmable Context Reasoners (Situation Reasoner, Action Reasoner and Prediction Reasoner) to handle more complex client constraints. Moreover, Kali-Smart is supported by a generic and flexible API and uses an ontology-based model, enabling a wide range of applications in various domains. This API hides the complexity and heterogeneity of contextual smart devices in a ubiquitous environment.
In this type of environment, due to the mobility and the limited resources of mobile devices (e.g., battery lifetime), it is difficult to provide the appropriate services at the right time, in the right place and in the right manner. Consequently, we cannot disregard the importance of how the adaptive environments will be able to reason semantically about context information and adapt their behavior according to the dynamic changes of this context. Instead of provisioning context changes in advance, Kali-Smart offers a dynamic searching service that allows its clients to use the current context services available in a requested location.
Finally, our proposal is based on the combination of the Kalimucho middleware [
1], mobile computing and the Internet of Things with ontologies and rule-based approaches. This combination draws on the benefits of each to realize a new autonomic adaptation approach for pervasive systems in general and pervasive healthcare systems in particular. Our approach allows: (1) supervising the system, detecting useful contextual information, reasoning about this information and adapting the behavior of the system according to the current context by providing the appropriate service to the user; (2) a distributed/centralized context monitor and semantic event detection in order to manage relevant information that could be important for the context regardless of its source; (3) a centralized semantic context reasoner and an incremental context prediction process, with decisions handled by the main host (e.g., computer, server); this choice is meant to prevent redundant adaptation decisions; (4) a distributed action mechanism with multimodality aspects that gives the application the flexibility and dynamicity needed to run through the user's smart environment, which, to our knowledge, has not been proposed yet in this field and constitutes the novelty and originality of our approach compared to previous related works; and (5) maximizing redundancy relays and the switching of mobile services. Compared to current related works, our method greatly improves matching accuracy by considering the whole meaning of the context conditions, while dramatically decreasing the time cost thanks to centralized/distributed event detection. In future works, we intend to extend the context model with new concepts and to evaluate our architecture in more complex case-study scenarios. Moreover, we will develop our mechanisms for dynamic component reconfiguration, including the migration of context middleware from local mobile devices to the cloud.