Blog - mimik Technology Inc (https://miphawell.com)

On-Device Analytics and Observability: New Frontiers in the Data Landscape
https://miphawell.com/on-device-analytics-pioneering-new-frontiers-in-the-data-landscape/
Tue, 18 Jun 2024 23:17:45 +0000

Abstract:

This article explores how mimik’s edgeEngine enhances the landscape of data analytics and observability, providing tools for more precise, timely, and actionable data insights. While not an analytics engine itself, mimik’s technology significantly improves data collection and preprocessing, serving as a robust foundation for deeper analytics processing by third-party systems or higher layers within the architecture. With the increasing demand for offline-first capabilities, privacy, and security, mimik’s edgeEngine empowers analytics and observability solutions with advanced on-device processing. This approach enables the application of specific data policies, offers capabilities for staged analytics, and structures data before it is sent to different endpoints according to policy requirements. While edgeEngine does not directly deliver analytical summaries or recommendations, it empowers solution providers to build more powerful, secure, and responsive analytics systems, positioning them at the forefront of innovative data processing.

Introduction:

Imagine a world where devices are not just passively sending data but are active participants in real-time decision-making. The mimik Hybrid Edge Cloud (HEC) technology fulfills this vision. With mimik edgeEngine—a cloud-native software operating environment—applications can utilize the power of smart devices to perform real-time data processing directly on the device and securely expose the outcome via APIs to the rest of the system, ensuring that only those with the right credentials and permissions can access it. This includes providing capabilities such as an API gateway, AI agents, and microservices runtime. The mimik edgeEngine supports nearly all operating systems and can run on a variety of devices, including smartphones, tablets, cameras, microcontrollers, robots, and drones. This platform enables offline-first operations, reducing latency, ensuring data privacy, and minimizing reliance on cloud connectivity. By facilitating seamless integration and discovery of microservices workloads, including AI agents, mimik’s edgeEngine delivers a more responsive and efficient system architecture.

Transformative Power of On-Device Analytics

Imagine a car detecting a driver’s medical emergency through a wearable device. Instantly, on-device analytics processes the data, alerts healthcare services, informs emergency contacts, and coordinates with smart city traffic systems to ensure a clear path to the nearest hospital. This real-time, cross-domain system interaction exemplifies the transformative power of on-device analytics.

The Role of On-Device Analytics

On-device analytics processes data where it’s generated, offering significant advantages:

  • Reduced Latency: Immediate decision-making without cloud dependency.
  • Enhanced Privacy: Local data processing minimizes exposure.
  • Improved Efficiency: Real-time responses and reduced bandwidth usage.

The Role of AI in On-Device Analytics

Artificial Intelligence (AI) enhances on-device analytics by enabling systems to learn from data, identify patterns, and make informed decisions. AI agents can operate on end devices where most data is generated, allowing for real-time responses to dynamic conditions. This capability is crucial in scenarios where immediate action is required, whether for safety-critical applications, industrial automation, personalized user experiences, or other situations where rapid decision-making is essential, with or without internet connectivity.

AI can enhance on-device analytics through:

  • Predictive Analysis: AI algorithms can predict potential issues before they occur, enabling preventive measures. For instance, AI can analyze health data from wearable devices to predict a possible cardiac event and alert medical professionals in advance.
  • Real-Time Decision Making: By processing data locally, AI agents can make instantaneous decisions. This is critical in a variety of industries, including industry 4.0, manufacturing, automotive, and healthcare, where immediate responses are essential for optimizing operations.
  • Personalization: AI can analyze user data to provide personalized experiences. In smart homes, AI can learn the habits and preferences of residents to optimize energy usage and enhance comfort.
  • Enhanced Security: AI can detect anomalies in data that might indicate security threats, allowing for immediate action to protect sensitive information.
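The "Enhanced Security" pattern above can be sketched in a few lines. This is an illustrative toy, not mimik's API: a rolling-statistics detector (the window size and z-score threshold are arbitrary assumptions) that flags outliers entirely on-device, so an alert can be raised without a cloud round-trip.

```python
from collections import deque
from statistics import mean, stdev

class OnDeviceAnomalyDetector:
    """Flags readings that deviate sharply from a rolling baseline,
    so an alert can be raised locally without a cloud round-trip."""

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # rolling baseline kept in device memory
        self.threshold = threshold           # z-score above which a reading is anomalous

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.samples) >= 2:
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Nine steady readings, then one spike the device flags by itself.
detector = OnDeviceAnomalyDetector(window=10, threshold=3.0)
results = [detector.observe(v) for v in [50, 51, 49, 50, 52, 50, 51, 49, 50, 120]]
```

The same loop could wrap a learned model instead of rolling statistics; the point is that the decision happens where the data is generated.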

How Analytics Works and mimik’s Impact

Analytics involves three primary phases: Data Collection, Transport, and Consumption.

Data Collection:

Data Collection is the first critical phase of analytics, encompassing logs, metrics, and traces. Logs capture specific events, providing a detailed account of what happens within the system. Metrics measure system performance at any given time, offering insights into the system’s operational health. Traces link these events and metrics to specific system components, helping to identify and attribute occurrences accurately. mimik enhances this phase by enabling devices with a microservice runtime and API gateways, significantly increasing observability. With edgeEngine, data can be collected at multiple levels—from the operating system and platform-specific APIs to user-permitted and application-level data—ensuring a comprehensive capture of system behaviour.

  • Logs: Capture specific events.
  • Metrics: Measure system performance.
  • Traces: Attribute events to system components.
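As a hedged illustration of how these three signal types can travel in one device-side envelope (the field names here are assumptions, not mimik's schema), a single record structure lets a trace ID link logs and metrics back to the component that produced them:

```python
import json
import time
import uuid

def telemetry_record(kind: str, source: str, body: dict, trace_id: str = "") -> dict:
    """Wrap a log, metric, or trace in one envelope so downstream systems
    can attribute every event to the device that produced it."""
    assert kind in ("log", "metric", "trace")
    return {
        "kind": kind,                              # log | metric | trace
        "source": source,                          # device or microservice that emitted it
        "trace_id": trace_id or uuid.uuid4().hex,  # links related records across components
        "ts": time.time(),                         # capture time at the source
        "body": body,                              # the signal payload itself
    }

rec = telemetry_record("metric", "camera-7", {"fps": 24.8})
wire = json.dumps(rec)  # ready for local buffering or transport
```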

mimik’s Added Value:

  • Increased Observability: By enabling devices with microservice runtime and API gateways, mimik significantly enriches data collection. This integration increases the number of data sources, capturing more granular details about system behavior.
  • Data Levels: mimik’s edgeEngine allows data collection at multiple levels (OS, platform without user permission, platform with user permission, and application level), ensuring comprehensive data capture directly from devices.
  • End-to-End View: Microservices running on smart devices act as active data sources, providing a holistic view of system performance and behavior.

Transport:

Once data is collected, it needs to be transported efficiently. This phase deals with both structured and unstructured data, optimizing it for transport. On-device processing can perform advanced filtering, reducing the volume of data transmitted over networks. mimik’s platform supports on-device microservices that perform machine learning-driven filtering, ensuring that only valuable data is sent. Additionally, on-device processing enhances the semantic value of data, improving its quality before it reaches central systems. By aggregating and buffering data optimally, mimik minimizes connection requests and reduces bandwidth usage, making data transport more efficient and cost-effective.

  • Data Types: Structured and unstructured data.
  • Optimization: On-device filtering and caching improve efficiency and reduce costs.

mimik’s Added Value:

  • Enhanced Filtering: On-device microservices can perform advanced filtering to reduce the data volume sent over the network. These filters can be machine learning-driven, ensuring only valuable data is transmitted.
  • Semantic Enhancement: On-device processing can enhance the semantic value of data, improving its quality before it reaches central systems.
  • Efficient Aggregation and Buffering: mimik’s platform allows for optimal aggregation and buffering, minimizing connection requests and reducing bandwidth usage.
  • Structured Data Transmission: Data can be structured and enriched according to policies (security, compliance), ensuring efficient and compliant data transport.
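The filtering, aggregation, and buffering described above can be sketched as follows, with assumed thresholds and a hypothetical `send` callback standing in for the actual network layer: readings too close to the last transmitted value are dropped on-device, and the rest are batched so the device opens one connection per batch instead of one per reading.

```python
class EdgeTransportBuffer:
    """Filters out low-value readings and batches the rest, so the device
    makes fewer connection requests and sends fewer bytes upstream."""

    def __init__(self, send, min_delta: float = 0.5, batch_size: int = 5):
        self.send = send            # callback that actually transmits a batch
        self.min_delta = min_delta  # readings closer than this to the last kept value are dropped
        self.batch_size = batch_size
        self.last = None
        self.batch = []

    def ingest(self, value: float) -> None:
        if self.last is not None and abs(value - self.last) < self.min_delta:
            return                  # filtered on-device: not worth transmitting
        self.last = value
        self.batch.append(value)
        if len(self.batch) >= self.batch_size:
            self.send(self.batch)   # one aggregated upload instead of many
            self.batch = []

sent = []
buf = EdgeTransportBuffer(sent.append, min_delta=0.5, batch_size=3)
for v in [20.0, 20.1, 20.2, 21.0, 22.0, 23.0, 23.1]:
    buf.ingest(v)
```

A real deployment would replace the delta rule with the machine-learning-driven filter the text mentions, but the shape of the pipeline is the same.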

Consumption:

The final phase of analytics is data consumption, which can occur either in real-time at the source (early consumption) or further along the data stream for aggregated business intelligence (late consumption). mimik enables microservices to process data locally on smart devices, making it actionable in real-time. This local processing enhances the semantic value of data and reduces latency, ensuring that insights are available faster. Furthermore, results from central processing can be pushed back to devices, providing minimal latency and optimal performance. This approach not only improves efficiency but also ensures that data-driven decisions can be made swiftly and securely.

  • Early Consumption: Real-time decision-making at the source.
  • Late Consumption: Aggregated data for high-level business intelligence.

mimik’s Added Value:

  • Local Processing: By enabling microservices on smart devices, mimik facilitates on-device data processing, making it actionable in real-time.
  • Enhanced Data Value: Local processing enhances the semantic value of data, whether for local or central consumption.
  • Reduced Latency: Results from central processing can be pushed to devices, ensuring minimal latency and optimal performance.
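The early/late split can be sketched as a single routing step; the threshold and the `act_locally`/`aggregate` callbacks below are hypothetical stand-ins for a device-local policy and an upstream pipeline:

```python
def consume(reading: float, act_locally, aggregate) -> None:
    """Early consumption: act at the source when the reading demands it.
    Late consumption: every reading is also rolled up for business intelligence."""
    if reading > 90.0:           # assumed device-local policy threshold
        act_locally(reading)     # real-time decision, no cloud round-trip
    aggregate.append(reading)    # later shipped upstream in bulk

alerts, rollup = [], []
for r in [42.0, 95.5, 60.0]:
    consume(r, alerts.append, rollup)
```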

Enhancing Observability and Business Intelligence

On-device analytics aids in two crucial areas:

  1. Observability:
  • System Behavior: Provides precise metrics and traces, integrating devices as active system components.
  • Debugging and Optimization: Enhances understanding and management of system performance.

mimik’s Added Value:

  • Behavioral Insights: By treating devices as integral parts of the system, mimik enables better insights into system behavior, helping identify issues more accurately.
  • Integrated Debugging: Devices with compute capabilities become part of the system, improving debugging and optimization processes.
  2. Business Intelligence:
  • System Usage: Offers insights into user interactions and system refinement.
  • Targeting and Refinement: Improves delivery and consumption targeting based on real-time data.

mimik’s Added Value:

  • Enhanced Understanding: By integrating on-device microservices, mimik provides deeper insights into system usage and performance, improving business intelligence.

mimik’s edgeEngine Capabilities

mimik’s edgeEngine provides a runtime environment for microservices, enabling real-time context and decision-making directly on devices. This leads to:

  • Accurate Context: Captures real data at the source, enhancing system reliability.
  • System Integration: Treats devices as integral parts of the system, not just endpoints.
  • Dynamic Service Mesh: Creates an adaptable system that can expand or shrink based on local conditions.

Conclusion

mimik’s platform transforms data analytics by enabling contextual on-device operations. This brings real-time processing, enhanced security, and reduced latency directly to where data is generated. By integrating endpoint devices as integral parts of the system, mimik supports offline-first, real-time data processing and decision-making directly on the device. This approach significantly reduces latency, adheres to data privacy and sovereignty, and securely exposes outcomes via standard APIs with granular access control right from the end-device.  By enabling applications to utilize the power of devices—from smartphones and tablets to drones and microcontrollers—with advanced cloud-native functionalities, mimik ensures seamless integration and discovery of AI agents and microservices workloads.

The ability to perform sophisticated analytics directly on devices paves the way for faster and more reliable decision-making across various industries, driving superior sustainability, adaptability, efficiency, performance, and user experience. mimik is not only transforming how data is processed and utilized in real time but also enabling new business models and opportunities within the Data-as-a-Service ecosystem. With mimik, the future of intelligent systems lies in the seamless and efficient management of data, driving innovation and unlocking unprecedented value on-device.

Bridging Worlds: Hybrid Edge Cloud and Distributed Ledger Reshaping Digital Services & Knowledge Exchange
https://miphawell.com/bridging-worlds-hybrid-edge-cloud-and-distributed-ledger-reshaping-digital-services-knowledge-exchange/
Tue, 09 Jan 2024 14:29:18 +0000

In today’s dynamic digital landscape, the convergence of edge computing and distributed ledger technology unveils a transformative potential that extends far beyond mere technical buzzwords. Beyond the limelight of these innovations, lies the creation of a monumental knowledge-as-a-service economy that could revolutionize the existing multi-trillion-dollar data broker system — an ecosystem riddled with opacity, guesswork, data scraping, and unauthorized information exploitation.

Hybrid Edge Cloud (HEC), pioneered by mimik, is a pivotal player in this convergence, empowering smart devices to act as cloud servers that run workflows locally, seamlessly share data, knowledge, and computing resources, collaborate via APIs, and transfer microservices among themselves, fostering an interconnected digital ecosystem.

Complementing this, distributed ledger technology, particularly within the realm of private chains and smart contracts tailored for a smaller set of stakeholders, takes charge of managing financial transactions in a secure, transparent, traceable, immutable, and efficient way. It’s crucial to distinguish this approach from the widely publicized crypto hype. Unlike the public blockchains associated with cryptocurrencies, private chains operate within a confined network, ensuring computational efficiency and scalability. These private chains, governed by smart contracts, enforce transparent and predefined rules agreed upon by a smaller set of stakeholders. This computational efficiency is a cornerstone, allowing for streamlined transactions and reducing the computational cost and environmental footprint associated with large-scale public blockchain networks.
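To make the tamper-evidence property concrete, here is a toy hash-chained ledger in Python. It is not any real DLT or smart-contract API, only an illustration of why a private chain lets a small set of stakeholders detect after-the-fact edits cheaply, without the computational cost of public-chain consensus:

```python
import hashlib
import json

def add_block(chain: list, transaction: dict) -> None:
    """Append a transaction, chaining it to the previous block's hash
    so any later tampering is detectable by every stakeholder."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"tx": transaction, "prev": prev_hash}, sort_keys=True)
    chain.append({"tx": transaction, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every link; a single edited transaction breaks the chain."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps({"tx": block["tx"], "prev": prev}, sort_keys=True)
        if block["prev"] != prev or block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

ledger = []
add_block(ledger, {"from": "drone-1", "to": "gateway-3", "asset": "fault-model-v2"})
add_block(ledger, {"from": "gateway-3", "to": "drone-1", "asset": "payment-token"})
ok = verify(ledger)
ledger[0]["tx"]["asset"] = "tampered"  # any stakeholder can now detect the edit
tampered_ok = verify(ledger)
```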

Imagine a world where mimik’s HEC serves as an open arena, allowing services to fluidly run on smart devices and effortlessly discover, collaborate, and exchange knowledge autonomously. Envision a smart thermostat communicating with your wearable to read your body temperature and then dynamically optimize temperature and energy consumption based on the current context. Through this framework, services can be hosted on devices and collaboratively navigate tasks, creating an environment far removed from centralized, pre-defined control and guesswork.

In this digital milieu, microservices play a pivotal role, akin to specialized functions executed by smart devices — language translation, image processing, or data analysis, for instance. Traditionally these microservices have been confined to centralized servers in data centers; mimik’s platform revolutionizes this landscape by facilitating their seamless traversal across devices. For instance, your smartphone can momentarily borrow an image processing microservice from your tablet, enhancing its capabilities instantly. This decentralized sharing of microservices endows devices with dynamic prowess, optimizing their functions collectively.

While the mimik platform enables the choreography of this seamless flow of microservices, private chains and tailored smart contracts within the network ensure secure and efficient transactions with audit trails and logs. Consider an autonomous drone in need of an AI model for fault detection in an industrial site or a port facility. mimik’s platform seamlessly facilitates the drone’s access to this model from another smart device, gateway, or a cloud server through a secure exchange within the private chain. The implementation of smart contracts ensures the integrity and transparency of this exchange, fostering a fair and efficient transaction environment.

Yet, the transformative power of this collaborative ecosystem goes beyond mere technical advancements. It envisions a shift — a monumental departure from the prevailing data broker system — to an automated, traceable knowledge-as-a-service economy. Here, stakeholders are rewarded for contributing accurate, valuable information and collaborating seamlessly. This paradigm disrupts the underhanded practices of the data broker economy, replacing opacity and guesswork with transparency and collaboration, fostering an ecosystem where all stakeholders are fairly rewarded for their contributions.

In essence, this convergence of HEC and distributed ledger not only elevates the efficiency, security, and autonomy of device collaboration but lays the foundation for a revolutionary knowledge-as-a-service economy — a beacon of fairness, transparency, and collaboration across diverse industries.

The four stages of edge AI
https://miphawell.com/the-four-stages-of-edge-ai/
Mon, 27 Nov 2023 20:45:36 +0000

In the rapidly evolving world of edge computing and artificial intelligence (AI), there are several crucial stages to consider. This blog delves into the complexities and innovations at each stage, beginning with Local Execution, where AI models are deployed directly on edge devices for real-time data processing. We then explore Contextualization, focusing on the local handling of contextual information for personalized responses. The third stage, AI to AI Communication, examines the critical coordination between multiple AI nodes, facilitated by edge microservices. Finally, AI-adapted Choreography highlights how multiple AI models across an edge network can dynamically interact with each other, optimizing overall system performance. Through these stages, the role of mimik technology emerges as pivotal, enabling seamless integration and efficient operation of AI models in edge computing environments.

Stage 1: Local Execution

In this stage, the focus is on deploying the AI model at the edge, which means running the model directly on the device that generates the data. Typically, the model is trained in the cloud and then pushed to the edge devices such as cameras or sensors. The purpose is to perform real-time recognition or analysis of data streams locally without relying on constant communication with the cloud.

The information generated by the local execution can be handled in different ways. If the recognition results are conclusive, only the result is sent to the cloud for further processing or storage. However, if the recognition is inconclusive, the image or relevant data may be sent to the cloud to retrain the model. Additionally, a lower resolution of the data stream can be archived for reference purposes.

For example, consider a security camera system using edge computing. The camera captures live video footage and runs an AI model locally for real-time object detection. Instead of sending every frame to the cloud for analysis, the AI model is deployed directly on the camera. The camera processes the video stream locally, identifies objects of interest, and sends only the relevant information, such as detected objects and their locations, to the cloud for further processing or storage.

It is essential to separate the model from the execution process because models need regular updates and the ability to manage the payload remotely. Mimik enables this separation by treating the model as a part of the edge microservice running on the device. The microservice acts as an interface between the cloud and the AI process, abstracting the handling of model updates from the recognition process. Another edge microservice handles the results, whether sending them to the cloud or other local systems. This ensures that the model can be easily updated and fine-tuned without disrupting the process of recognition or analysis.

By exposing the capabilities of handling the model and results as a local API, mimik simplifies the development process of AI solutions, making integrating edge computing into the workflow easier.
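The Stage 1 separation of model from execution can be sketched as below, with hypothetical names (not mimik's actual SDK): the microservice owns the model reference, exposes a hot-swap update path, and routes conclusive results upstream while queueing inconclusive samples for retraining.

```python
class RecognitionMicroservice:
    """Separates the model from the execution loop: the model can be
    swapped remotely without touching the recognition code path."""

    def __init__(self, model, publish_result, queue_for_retraining,
                 confidence_floor: float = 0.8):
        self.model = model                                # callable: frame -> (label, confidence)
        self.publish_result = publish_result              # conclusive results go upstream
        self.queue_for_retraining = queue_for_retraining  # inconclusive samples feed retraining
        self.confidence_floor = confidence_floor

    def update_model(self, new_model) -> None:
        """Remote update endpoint: hot-swap the model between frames."""
        self.model = new_model

    def process(self, frame) -> None:
        label, confidence = self.model(frame)
        if confidence >= self.confidence_floor:
            self.publish_result({"label": label, "confidence": confidence})
        else:
            self.queue_for_retraining(frame)  # inconclusive: send data back for retraining

results, retrain = [], []
svc = RecognitionMicroservice(lambda f: ("person", 0.95), results.append, retrain.append)
svc.process("frame-1")
svc.update_model(lambda f: ("unknown", 0.40))  # a new model version pushed from the cloud
svc.process("frame-2")
```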

Stage 2: Contextualization

In this stage, the model is executed locally, and the handling of the context in which the process occurs is also done locally. The context refers to events received by the device running the process or other devices within the same cluster, such as events triggered by user inputs through a UI or sensor inputs.

Local contextualization allows for the personalization of the model based on user preferences or specific scenarios. By processing events locally, edge devices can provide tailored experiences or responses without constantly sending data to the cloud for analysis and decision-making.

For example, consider an intelligent home system using edge computing. The system includes various devices like smart speakers, cameras, and sensors. Each device runs AI models locally to process data and respond to user commands. When a user speaks a command to a smart speaker, the AI model on the speaker processes the command locally, taking into account the context of the user’s preferences and the current state of the home environment. The speaker can provide personalized responses or control other devices within the cluster based on local contextual information.

Mimik achieves contextualization by running multiple edge microservices on the same node and facilitating interaction with other edge microservices on different nodes. This decentralized approach minimizes the need for data transfer to the cloud, as the devices within the cluster can communicate and share contextual information directly.

Stage 3: AI to AI communication

In this stage, the recognition is that a complex edge system will be made of many nodes, each of which may have an AI handling its logic. While model execution happens at the edge, the integration between the AIs is typically coordinated via the cloud. To support local decision-making, it must be possible for the AIs to communicate directly, either by exchanging models or by exchanging the events generated by the AI processes that use those models.

For example, consider an autonomous driving system using edge computing. The system comprises multiple edge devices, such as cameras, LiDAR sensors, and control units, each running its own AI model for perception, decision-making, and control. These devices must exchange information and coordinate safe and efficient driving decisions. Instead of relying solely on a centralized system in the cloud, direct communication between the edge devices’ AI models is essential for local decision-making.

Mimik enables AI-to-AI communication by allowing models to be handled by edge microservices and creating an ad-hoc edge service mesh. This allows direct communication between edge microservices within the same node or between edge microservices running on different nodes. With mimik, multiple AIs at the edge can exchange information or models with a well-defined contract, facilitating coordinated actions without heavy reliance on a centralized cloud system.
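One way to picture such a "well-defined contract" is a validated message schema between peers; the fields and the braking rule below are assumptions for illustration, not a mimik interface:

```python
REQUIRED_FIELDS = {"sender", "event", "confidence"}

def make_event(sender: str, event: str, confidence: float) -> dict:
    """An inter-AI message both sides have agreed to understand."""
    return {"sender": sender, "event": event, "confidence": confidence}

def handle_peer_event(msg: dict) -> str:
    """The receiving AI validates the contract before acting on a peer's event."""
    if not REQUIRED_FIELDS <= msg.keys():
        return "rejected: malformed"
    if msg["event"] == "obstacle" and msg["confidence"] >= 0.7:
        return "action: brake"  # local decision between peers, no cloud round-trip
    return "action: ignore"

decision = handle_peer_event(make_event("lidar-node", "obstacle", 0.92))
bad = handle_peer_event({"event": "obstacle"})  # violates the agreed contract
```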

Stage 4: AI-adapted choreography

In this stage, the focus is on dynamically choreographing the behavior of multiple AI models across the edge network to optimize overall system performance, resource allocation, and coordination. The communication between AI models within each node, and between nodes, adapts to maximize the effectiveness of the collection of nodes as a whole.

For example, let’s consider a smart city infrastructure using edge computing. The infrastructure consists of various edge devices deployed throughout the city, such as traffic cameras, environmental sensors, and smart streetlights. Each device runs its AI model to perform specific tasks like traffic monitoring, air quality analysis, and intelligent lighting control.

In the AI-adapted choreography stage, the AI models within each device collaborate and communicate to optimize the overall performance of the smart city infrastructure. The models exchange information about traffic conditions, environmental data, and lighting requirements. Based on this information, they dynamically adapt their behavior to ensure efficient traffic flow, minimize energy consumption, and respond to changing environmental conditions.

Since these systems are generally developed by many organizations (with different standards and protocols), the context and the AI of each system component also help define the protocol between components, allowing components that were not originally designed to communicate with each other to exchange information.

Mimik plays a crucial role in enabling AI-adapted choreography by providing the infrastructure for communication and coordination between the AI models across the edge network. It allows the AI models running on different devices to exchange data, share insights, and collectively make decisions to optimize the operation of the smart city infrastructure. Mimik’s edge service mesh facilitates the dynamic choreography of AI models and ensures efficient collaboration.

In summary, in the AI-adapted choreography stage, mimik enables the dynamic coordination and optimization of multiple AI models across an edge network, allowing them to collectively achieve better system performance, resource allocation, and coordination in complex scenarios like a smart city infrastructure.

Conclusion

The role of mimik, as mentioned in the text, is to enable these stages by treating the AI model as a part of the edge microservice running on the device. It abstracts the handling of model updates from the recognition process and facilitates the exchange of information between edge microservices. By providing a local API and creating an ad-hoc edge service mesh, mimik simplifies the development process and integration of edge computing into AI workflows.


Endpoint Device Security, The Missing Link in SASE
https://miphawell.com/endpoint-device-security-the-missing-link-in-sase/
Fri, 20 Oct 2023 09:07:17 +0000

Introduction

In today’s rapidly evolving digital landscape, ensuring the security of endpoint devices has become more critical than ever before. The proliferation of remote work, mobile devices, and cloud-based applications has introduced new challenges for safeguarding sensitive data and maintaining network integrity. In response to these challenges, many organizations are turning to Secure Access Service Edge (SASE) solutions to fortify their security posture.


Traditional Security Implementation

Traditional security models are often described as a “castle-and-moat” approach. In this model, the organization’s network is considered the castle, and security solutions such as firewalls and VPNs act as the moat. Everything inside the network perimeter is considered trusted, while external elements are treated with suspicion.

  1. Perimeter-based Security: Traditional security relies on a perimeter-based model where the organization’s network is the fortress, and security solutions (firewalls, VPNs, etc.) serve as the protective moat. Elements inside the perimeter are trusted, while anything external is treated cautiously.

  2. Centralized Security Appliances: Security solutions, like firewalls and intrusion prevention systems, are often centralized, especially at the data center. This often results in traffic being backhauled from remote locations or branches to this central point for inspection.

  3. VPN for Remote Access: Remote users typically connect to the network using VPNs, which can introduce latency since traffic from remote users is tunneled to the central office before accessing the internet or other resources.

  4. Disparate Solutions: Traditional setups might have various standalone solutions – a firewall from one vendor, a secure web gateway from another, VPNs from another, etc. This can complicate integration and management.

SASE Security Implementation

While traditional security implementations were well-suited for a time when most resources and users were centralized, the shift towards cloud services, remote work, and mobile users has revealed their limitations. SASE aims to address these modern challenges by offering a more flexible, integrated, and decentralized cloud-first security solution optimized for the current state of enterprise computing. Here’s how it differs:

  1. Identity and Context-aware Security: SASE treats every access attempt as untrusted instead of relying on a network perimeter. Access is granted based on the user’s or device’s identity, the access request’s context, real-time analytics, and other factors.

  2. Decentralized Security Services: Security is implemented closer to the point of access, often at the edge or as a cloud service. This means users connect to their nearest security service point, reducing latency.

  3. Integrated Suite of Services: SASE aims to combine various security services like Secure Web Gateways (SWG), Cloud Access Security Brokers (CASB), Firewall as a Service (FWaaS), Zero Trust Network Access (ZTNA), etc., into a unified platform. This integrated approach simplifies management and ensures that security policies are applied everywhere.

  4. Optimized for Cloud and Mobile: Traditional security models have shown strains as organizations have shifted to cloud services and remote work. SASE is designed with the cloud and mobility in mind, ensuring that security policies are consistently applied no matter where users are or which devices they use.

  5. Scalable and Flexible: Being cloud-native, SASE solutions can scale as required and adapt quickly to changing business needs.
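To make the identity- and context-aware principle concrete, here is a minimal sketch of an access decision that weighs identity, device health, and request context, denying by default. The resource names and policy fields are illustrative and not taken from any particular SASE product:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_healthy: bool      # e.g., patched OS, antivirus running
    location_trusted: bool    # e.g., known network or geography
    resource: str

# Hypothetical per-resource policy table.
POLICY = {
    "hr-portal":   {"require_healthy": True,  "require_trusted_location": True},
    "public-docs": {"require_healthy": False, "require_trusted_location": False},
}

def decide(req: AccessRequest) -> bool:
    """Treat every access attempt as untrusted; grant only when all
    checks required for the requested resource pass."""
    rules = POLICY.get(req.resource)
    if rules is None:
        return False  # unknown resource: deny by default
    if rules["require_healthy"] and not req.device_healthy:
        return False
    if rules["require_trusted_location"] and not req.location_trusted:
        return False
    return True
```

A real SASE platform would also factor in real-time analytics and continuously re-evaluate the decision during the session, but the deny-by-default shape is the same.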

The Role of the Device in SASE Implementation

While SASE drastically changes the enterprise security approach, it still considers the end-user device, whether mobile, non-mobile, or IoT, an integral part of the security solution. In a SASE solution, the services primarily reside in the cloud, leveraging a global network of points of presence (PoPs) to provide security and networking services as close to the end user or device as possible.

However, specific components or agents might run on the end-user’s device to interact with these cloud-based services. Here’s what typically runs on the device in a SASE architecture:

  1. Endpoint Agent/Client Software: This is a lightweight software client installed on the user’s device (laptop, smartphone, tablet, etc.). The agent is responsible for:

    • Initiating secure connections to the SASE cloud.

    • Enforcing local security policies.

    • Monitoring device health and security posture.

    • Redirecting traffic to the SASE service for security checks and policy enforcement.

  2. Zero Trust Network Access (ZTNA) Components: ZTNA ensures that every access attempt to resources, even from within the network, is authenticated and verified. The endpoint agent often includes components to enforce ZTNA principles, such as:

    • Identity verification.

    • Context-aware access controls (based on device health, location, user role, etc.).

    • Application-level connectivity (connecting the user only to the specific applications they need, not the entire network).

  3. Data Encryption Tools: The agent ensures that data in transit is encrypted when connecting to the SASE cloud or other organizational resources.

  4. Local Security Services: While most security services in a SASE architecture are cloud-based, certain local checks or policies might still be enforced on the device. This can include:

    • Local firewall rules.

    • Host intrusion prevention systems.

    • Data loss prevention checks for sensitive data.

  5. Security Posture Check: Before granting access to resources, the SASE solution might check the device’s security posture. This can involve verifying:

    • Antivirus/antimalware status.

    • Operating system and software patch levels.

    • Compliance with organizational security policies.

  6. Management and Configuration Tools: These allow IT teams to configure the agent’s behavior, update policies, and integrate with other IT management tools.

  7. Logging and Monitoring Components: The agent might also collect logs and other relevant data for analysis. This information can be sent to the central SASE solution for anomaly detection, analysis, and reporting.
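As an illustration of the posture-checking duty described above, the following sketch shows what a simplified endpoint-agent check might look like. The check names and the patch-level baseline are hypothetical, not part of any specific SASE agent:

```python
REQUIRED_MIN_PATCH_LEVEL = 42  # hypothetical organizational baseline

def check_posture(device: dict) -> list:
    """Return the list of failed checks; an empty list means the device
    may proceed to request access from the SASE service."""
    failures = []
    if not device.get("antivirus_up_to_date"):
        failures.append("antivirus")
    if device.get("os_patch_level", 0) < REQUIRED_MIN_PATCH_LEVEL:
        failures.append("patch_level")
    if not device.get("disk_encrypted"):
        failures.append("encryption")
    return failures
```

The returned list could feed both the local access decision and the logs forwarded to the central SASE solution for reporting.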

The exact components and functionalities vary depending on the specific SASE solution provider and the organization’s requirements. However, SASE aims to keep the on-device footprint lightweight and leverage the cloud for most of the heavy lifting, ensuring consistent policy enforcement and optimal performance regardless of the device’s location. These aims do not consider the latest edge-in and microservice-architecture developments, which the mimik platform enables. This includes:

  • Running microservices that expose APIs directly on devices

  • Handling ad-hoc edge service meshes where microservices interact with each other directly without going through the cloud

The Role of mimik HEC in SASE Implementation

Now, let’s explore how mimik Hybrid Edge Cloud (HEC) software platform can contribute to the implementation of SASE, enhancing its capabilities for securing endpoint devices.

The mimik HEC is crucial in enhancing SASE implementation by providing innovative solutions and components that ensure secure, efficient, and context-aware protection for endpoint devices. Here’s how mimik contributes:

  1. Distributed Computing: mimik facilitates distributed computing at the edge, reducing latency and enabling real-time analytics and response, essential for security solutions like SASE.

  2. Edge Server Capabilities: Devices powered by mimik can act as edge cloud servers, deploying SASE solutions closer to data sources or users, improving performance, and reducing the load on central servers.

  3. Interoperability: mimik’s platform fosters interoperability between different cloud services, edge devices, and on-premises resources, a critical requirement for implementing SASE in a hybrid environment.

  4. Resource Optimization: Implementing SASE solutions with mimik edgeEngine on the mimik hybrid edge cloud platform can optimize network and computing resource utilization by balancing the load between cloud, edge, and on-premises.

  5. Enhanced Security: Integrating security microservices at the edge using mimik edgeEngine enables granular and context-aware security enforcement, essential for Zero Trust Network Access (ZTNA) and Secure Web Gateway (SWG) components of SASE.

Edge-in Approach with mimik

One of the unique aspects of mimik’s contribution is the ability to move or complement SASE functions further to the edge, even directly on the user or IoT device. This approach enables a more contextualized and efficient security strategy, allowing for device-to-device interaction that is impossible in a traditional cloud-first SASE implementation.

mimik’s Impact on Key SASE Components

Looking at the significant components of a SASE architecture, it is possible to understand the impact of an edge-in approach enabled by the mimik platform:

  • Cloud Access Security Broker (CASB): By running CASB as an edge microservice on the device itself (eCASB), organizations can benefit from:

    • Decentralized Data Management: As cloud applications proliferate, so does the data between devices and these applications. With edge computing capabilities from solutions like mimik edgeEngine, there’s potential for more localized data processing and decision-making at the data source before sending it out. This can be leveraged to inspect data locally on a device before it’s sent to or received from a cloud service, aligning with some CASB functions.

    • Local Policy Enforcement: With the ability to execute applications and processes at the edge, organizations could run lightweight, localized CASB-like functions on the device. This would mean real-time policy enforcement even before data or requests hit the main CASB solution in the network path, allowing the ability to do multi-cloud brokering right from the device (at the edge) instead of in the cloud.

    • Enhanced Performance: By integrating edge capabilities with CASB functionalities, certain processes can be offloaded to the edge, reducing latency. For instance, initial policy checks or data classifications, augmentation, and tagging can be done on-device, reducing the need for all traffic to be routed through a central CASB solution.

    • Integration with Other Edge Services: As part of a broader edge ecosystem, CASB functionalities can be combined with other edge services, enabling more comprehensive security and data management solutions tailored for specific environments or use cases.

    • Custom CASB Solutions for Unique Use Cases: Developers can potentially build custom CASB solutions tailored to specific organizational needs or niche applications, leveraging the flexibility and capabilities provided by mimik edgeEngine.

  • Zero Trust Network Access (ZTNA): The mimik platform adopts a zero-trust network approach as a core feature of the edge system. This approach allows edgeEngine to provide the following:

    • Localized Access Control: With computing capabilities extended to the edge, access decisions can be made locally, right where the request originates. This can reduce latency and make access controls more efficient, as not every decision must be routed through a centralized authority.

    • Enhanced Security for IoT Devices: IoT devices can often be weak points in a network. If these devices are empowered with edgeEngine capabilities and integrated with ZTNA principles, they could have enhanced security postures, mitigating some of the risks associated with IoT deployments.

    • Integration with Decentralized Applications: As more applications and services become decentralized and move to the edge, integrating ZTNA principles becomes crucial. Using a platform like mimik edgeEngine, developers could create applications with built-in ZTNA functionalities tailored for specific edge use cases.

    • Continuous Authentication and Authorization: ZTNA emphasizes continuous verification, not just at the beginning of a session. With edge computing capabilities, this continuous check can be done more efficiently, utilizing real-time device data.

    • Micro-segmentation at the Edge: ZTNA often employs micro-segmentation to isolate and protect network resources. With edgeEngine, this segmentation could be extended to the edge, providing more granular isolation and protection of resources, data, and services.

  • Next-Generation Firewall (NGFW): The mimik edgeEngine resides on top of the operating system and therefore does not have deep access to the network stack, so it cannot implement features like deep packet inspection (DPI). However, by implementing an API gateway, a microservice running within edgeEngine can enable the following features:

    • Localized Traffic Inspection: With applications and services running on the edge, localized traffic inspection and filtering at the message level can potentially be done. Rather than sending all traffic through a central NGFW, initial inspections and policy checks could be performed on-device or at the edge, enhancing responsiveness and reducing unnecessary traffic loads on central security appliances.

    • Context-rich Policies: The edgeEngine can provide granular, context-rich data from devices, given its edge-centric architecture. This context can be valuable for NGFW functions, allowing for dynamic and adaptive security policies based on real-time device status, user behavior, location, etc.

    • Protection of IoT Devices: IoT devices, often seen as vulnerable network points, could benefit from localized firewall capabilities. By integrating NGFW functionalities at the edge, there’s potential for better security postures for IoT deployments, with immediate threat detection and response.

    • Integration with Edge Services: As more services move to the edge, there’s an increasing need to ensure these services are secured. By integrating NGFW capabilities into edge-based services powered by mimik edgeEngine, there’s an opportunity for holistic security that’s tailored for edge-specific scenarios.

    • Decentralized Threat Detection and Response: By leveraging edge computing capabilities, threat detection and response can potentially be decentralized. If an anomaly or potential threat is detected on a device or within a network segment, immediate action can be taken at the edge, even before the central NGFW or security operations center is alerted.

    • Scalability and Adaptability: With the growth of connected devices and increasing network complexity, scalability becomes a concern for traditional NGFWs. By offloading some functionalities to the edge, there’s potential for more scalable security solutions that adapt to changing network conditions and demands.

  • Secure Web Gateway (SWG): By allowing microservices to run directly on the device, on top of the mimik edgeEngine and behind an API gateway, it is possible to enable an eSWG with the following capabilities:

    • Real-time Content Filtering: An eSWG running on the device can provide real-time content filtering, blocking malicious or inappropriate content before it reaches the user’s device.

    • Local Policy Enforcement: Organizations can implement customized content filtering policies at the edge, ensuring that users are protected from web-based threats even when they are not connected to the corporate network.

    • Reduced Latency: By offloading content filtering to the edge, latency is minimized, resulting in faster web access for users.

    • Improved Performance: An eSWG can optimize web traffic, reducing the load on central SWG solutions and improving overall network performance.

    • Integration with Local Services: Organizations can integrate their eSWG with other local services and security components to provide a comprehensive security posture.

    • Enhanced Privacy: With an eSWG at the edge, user data remains on the device, enhancing privacy and reducing the need to send user data to centralized SWG solutions.
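To ground the eSWG idea, here is a hedged sketch of on-device content filtering: a request is checked against a local block list and a stand-in category lookup before it ever leaves the device. The domains and categories are made up for illustration; a real eSWG would sync its policy from the central SASE service:

```python
from urllib.parse import urlparse

# Illustrative local policy; a real eSWG would sync this from the
# central SASE service.
BLOCKED_DOMAINS = {"malware.example", "phishing.example"}
BLOCKED_CATEGORIES = {"gambling"}

def classify(domain: str) -> str:
    """Stand-in for a real URL-categorization service."""
    return {"casino.example": "gambling"}.get(domain, "uncategorized")

def allow_request(url: str) -> bool:
    """Filter a web request on-device, before it leaves for the network."""
    domain = urlparse(url).hostname or ""
    if domain in BLOCKED_DOMAINS:
        return False
    if classify(domain) in BLOCKED_CATEGORIES:
        return False
    return True
```

Because the check runs locally, the blocked request never generates network traffic, which is also where the latency and privacy benefits listed above come from.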

 

Conclusion

Securing endpoint devices is paramount in the ever-evolving landscape of cybersecurity and remote work. Traditional security models have limitations, especially in the face of the cloud, mobility, and the Internet of Things (IoT). Secure Access Service Edge (SASE) represents a new paradigm in security, offering an integrated, cloud-native, and context-aware approach. The mimik HEC is pivotal in enhancing SASE implementation by enabling distributed computing at the edge, fostering interoperability, and providing the tools for secure, efficient, and context-aware protection. By moving or complementing SASE functions to the edge, mimik’s innovative approach enhances security, reduces latency, and opens new possibilities for device-to-device interactions, bolstering the security posture of organizations in a rapidly changing digital world. With SASE and mimik, the future of endpoint security looks brighter, more efficient, and more resilient than ever before.

The post Endpoint Device Security, The Missing Link in SASE first appeared on mimik Technology Inc.

]]>
Beyond Boundaries: Enabling Performance and Security with API Gateways Everywhere https://miphawell.com/beyond-boundaries/ Wed, 16 Aug 2023 02:00:00 +0000 https://stg-2x.miphawell.com/?p=79190 In a cloud-first architecture, API gateways play a crucial role in enabling communication between different cloud services and applications.

The post Beyond Boundaries: Enabling Performance and Security with API Gateways Everywhere first appeared on mimik Technology Inc.

]]>
In a cloud-first architecture, API gateways play a crucial role in enabling communication between different cloud services and applications. They act as a central point of control and provide a unified interface to clients, making it easier to manage and monitor the overall system. API gateways also provide a layer of abstraction between the client and the cloud services, allowing different applications to be developed while still using the same services.

In essence, an API gateway is a server that acts as an intermediary between the client and the cloud services, performing tasks such as authentication, rate limiting, caching, and protocol translation. By handling these tasks, the API gateway can improve the performance, scalability, and security of the overall system architecture. It is commonly used in microservices and serverless architectures.
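The tasks listed above can be illustrated with a toy gateway. This is a deliberately simplified, in-process sketch rather than a production gateway: it authenticates an API key, applies a sliding-window rate limit, and forwards the request to a registered backend:

```python
import time

class ApiGateway:
    """Toy gateway: authenticates an API key, rate-limits per key, and
    forwards the request to the backend registered for the route."""

    def __init__(self, backends, api_keys, max_per_minute=60):
        self.backends = backends          # route -> callable
        self.api_keys = api_keys          # set of valid API keys
        self.max_per_minute = max_per_minute
        self._hits = {}                   # api key -> recent timestamps

    def handle(self, route, api_key, payload=None):
        if api_key not in self.api_keys:
            return 401, "unauthorized"
        # Keep only hits from the last 60 seconds for this key.
        now = time.monotonic()
        window = [t for t in self._hits.get(api_key, []) if now - t < 60]
        if len(window) >= self.max_per_minute:
            return 429, "rate limit exceeded"
        window.append(now)
        self._hits[api_key] = window
        backend = self.backends.get(route)
        if backend is None:
            return 404, "no such route"
        return 200, backend(payload)
```

Real gateways add caching, protocol translation, and TLS termination on top of this basic authenticate-limit-forward loop.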

In the conventional API Gateway market, various vendors offer API Gateway solutions for managing, securing, and exposing APIs to external or internal applications. The main players are Amazon API Gateway, Kong, Rapid, Google Cloud (Apigee) and Azure API Management, among others. They offer different solutions based on their functionality and features such as Proxy, Transformation Gateways, Security, Orchestration and Monetization. Developers can choose the right one according to their specific requirements and use cases.

Examining these offerings, we can identify three distinct types of API gateways: façade, exposure, and listening endpoint.

The first type acts as a façade for service implementations that operate in separate environments. The API gateway serves as a single entry point for all incoming API requests, abstracting the complexity of the underlying microservices or distributed systems. These gateways are engineered to manage specific protocols such as HTTP and WebSocket, and they primarily focus on addressing security concerns, particularly TLS. By using an API gateway as a façade, organizations can simplify the management of their APIs and services, improve security, and enhance the developer experience for API consumers.

The second one, often called an API Exposure Gateway, is an API gateway that alters or enhances an API implemented in a different environment. It focuses on making APIs accessible to external consumers, partners, or third-party developers. The main goal of an API Exposure Gateway is to facilitate secure, controlled, and efficient access to APIs while ensuring a positive developer experience. It can implement business logic, including caching, throttling, and even metering for billing purposes. API Exposure Gateways are crucial for businesses looking to expose their APIs to a broader audience, foster innovation, and create new revenue streams through API monetization. By providing a secure and controlled environment for API consumption, these gateways enable organizations to maximize the value of their APIs while minimizing risks.

The last one, the listening-endpoint API gateway, terminates the network connection and is commonly utilized in serverless environments to instantiate the process required to execute the operation requested by the API call. This endpoint acts as an entry point for clients to access the functionality provided by the API, and it is responsible for processing incoming requests, executing the appropriate actions, and returning the expected responses. In most cases, the API and the service function within the same environment.

Though these gateways vary in their execution, their primary goal remains consistent: to act as a central entry point and intermediary for managing, securing, and exposing the APIs of cloud services to external consumers or internal applications. A gateway enables cloud services to be consumed by other cloud services or client applications without those consumers needing to understand the service’s inner workings. It can operate in the cloud or near the client applications as an edge cloud broker.

Suppose for a moment this cloud-centric approach didn’t exist, and it was feasible to run microservices (or functions as a service) at the edge, within the device or system hosting the client applications. In that case, a reverse API gateway becomes necessary to expose these microservices. However, instead of exposing cloud services to client applications, it focuses on exposing edge microservices to client applications, to other edge microservices running on different nodes, or to cloud services. Consequently, each node within a system serves as an individual server running microservices with APIs exposed at the software level, establishing an ad hoc edge service mesh among all nodes capable of discovering one another.

In this edge first scenario, the reverse gateway would function as a local API Gateway, managing the microservices within the device itself. It would have a vital role in managing and securing the communication between microservices and client applications. By functioning as a local API Gateway, it would manage, secure, and optimize API traffic within the device or system, providing a unified entry point for accessing microservices and improving overall performance and security. Moreover, the local API Gateway would also enable better resource utilization and faster response times as the microservices would be running in the same environment as the client applications.
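A minimal way to picture this is a registry-style sketch: each node runs a local (reverse) gateway exposing its own microservices, and nodes that discover one another form an ad hoc mesh. The class and service names below are illustrative, not mimik’s actual API:

```python
class LocalGateway:
    """Reverse API gateway sketch: exposes microservices running on this
    node to local apps and to peers, instead of fronting cloud services."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.services = {}  # service name -> handler

    def register(self, name, handler):
        self.services[name] = handler

    def call(self, name, *args, **kwargs):
        if name not in self.services:
            raise KeyError(f"service {name!r} not on node {self.node_id}")
        return self.services[name](*args, **kwargs)

class Mesh:
    """Ad hoc edge service mesh: joined nodes can discover which peers
    expose a given microservice and call it directly, without the cloud."""

    def __init__(self):
        self.nodes = {}

    def join(self, gateway):
        self.nodes[gateway.node_id] = gateway

    def discover(self, service):
        return [nid for nid, gw in self.nodes.items() if service in gw.services]
```

In a real deployment the handlers would be HTTP endpoints and discovery would run over the network, but the pattern is the same: every node is both an API server and an API consumer.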

This reverse API gateway is a natural next step in the evolution of the API Architecture, well described in the Netflix technology blog.

Left to right: 1) Accessing the monolith via a single API, 2) Accessing microservices via separate APIs, 3) Using a single API gateway, 4) Accessing groups of microservices via multiple API gateways. Source: Netflix Technology Blog

Edge microservices can either access edge microservices on other nodes directly via reverse API gateways, without going through the cloud, or access cloud microservices via a single API gateway.

The concept of device-as-a-service will then be established, allowing client applications to utilize features from a single device or a collection of devices through a series of APIs without needing to comprehend the inner workings of the implementation. This will spark a surge in innovation, as it enables the development of applications on top of systems without requiring expertise in those specific systems. For example, as the automobile industry continues its shift towards software-defined vehicles (SDVs), it is crucial to begin exposing car functionalities to developers outside the automotive realm to harness the creative potential of the mobile app industry. A reverse API gateway is essential for accomplishing this objective.

mimik’s edgeEngine provides this reverse API gateway, enabling each node to serve as a data source at the application level. It allows client applications to utilize features from a single device or a collection of devices through a series of APIs without needing to comprehend the inner workings of the implementation. The edgeEngine comprises an API gateway, an OS-agnostic runtime environment, a discovery service for nodes and edge microservices, and an edge analytics platform. Together, these enable the development of applications on top of systems without requiring expertise in those specific systems, sparking a surge in innovation.

The post Beyond Boundaries: Enabling Performance and Security with API Gateways Everywhere first appeared on mimik Technology Inc.

]]>
mimik for Digital Twin https://miphawell.com/mimik-for-digital-twin/ Thu, 10 Aug 2023 06:34:01 +0000 https://stg-2x.miphawell.com/?p=79436 mimik for Digital Twin

The post mimik for Digital Twin first appeared on mimik Technology Inc.

]]>
Abstract

A digital twin is a virtual representation of a physical object, process, or system. It is a computerized model that simulates the behavior of a real-world object or system in real-time, providing a detailed and accurate reflection of its physical counterpart.

A digital twin is created by collecting data from various sources, such as sensors, cameras, and other IoT devices, and processing that data using machine learning algorithms and other analytical tools. The resulting model can monitor, analyze, and optimize the physical system’s performance and predict future behavior and outcomes.

Digital twins are commonly used in manufacturing, aerospace, and energy industries. They can be used to simulate the operation of complex machinery, equipment, and systems and identify potential issues or inefficiencies before they occur in the real world. They are also used in building design and construction to optimize performance, maintenance, and energy efficiency.

A digital twin is composed of two main steps:

  1. development phase, pre-production (aka pre-prod)

  2. deployment and update phase in production (aka post-prod)

Pre-prod digital twin

Consider the development lifecycle of a solution involving embedded software components (a car, a manufacturing line, etc.) that run QNX, many variants of Linux, Android, and even iOS when a user’s phone is involved. A developer implementing a new feature does not have the actual environment available, unlike a cloud developer, who has a development environment (DEV) for implementation and a QA environment for testing the compliance of the implementation. To remediate this problem, a simulation of the environment has to be created. This is where the need for a pre-prod digital twin emerges.

Adopting modern development practices and creating an environment in the cloud is a natural solution for such a simulation. Because resources in a cloud environment are virtualized and generally pooled using a Kubernetes orchestrator, a natural consequence is to containerize every simulation component. The developer implementing a new feature then dynamically deploys images and containers using Kubernetes.

This will work well assuming the following two conditions hold:

  1. Any legacy software that runs in an actual environment needs to be containerized.

  2. The simulation in the cloud environment must closely mimic the actual environment.

These two conditions are difficult to satisfy: real-time operating systems are frequently used in actual embedded environments, and containerizing legacy components has limitations when dealing with user interfaces and multiple processes within the same container. As a result, once QA passes in the cloud, transferring the new feature to the actual environment generally surfaces new problems, rendering the cloud testing obsolete.

Another approach to creating a pre-prod digital twin is replicating the actual environment in the cloud. For that, it is often necessary to run an RTOS like QNX. However, as most container technologies (e.g., Docker) depend on operating system functions (e.g., cgroups), it is not possible to run these containers on QNX. This is why there is a need for a technology that provides a runtime independent of the operating system, and this is what mimik edgeEngine provides.

Running QNX in the cloud, with mimik edgeEngine on top, allows a developer to implement microservices or functions-as-a-service, making a seamless transition from the pre-prod digital twin to the actual environment possible.

Post-prod digital twin

Once the feature is deployed in real systems, it is essential to have a feedback loop to refine the simulation. It allows developers and system analysts to understand the behavior of the actual system and how these behaviors match the behavior of the simulated environment. And this is where a post-prod digital twin needs to be created.

One solution is to use the pre-prod digital twin instance to implement the post-prod digital twin. However, this implies the need to transfer a large amount of data to replicate in the cloud the context of the actual environment. This can be a source of many problems:

  1. Cost: the more data transferred, the higher the cost, both for transport and for processing, particularly when dealing with low-level signals.

  2. Power consumption: transmitting data over a network generally consumes more power than processing the data locally and transmitting only the results.

  3. Privacy: in some cases, the data to be transmitted concerns the user, so sending it to the cloud may breach privacy regulations.

One solution is to split the digital twin into two parts: one running in the actual system and the other running as a consolidation in the cloud, since one role of the cloud part is to deal with multiple actual systems (e.g., cars) and thereby avoid bias when extracting generic behavior.

Technology is needed that allows microservices to run in any environment (regular OS, real-time OS, main CPU, controllers) to perform the pre-analysis and send smart signals to an aggregated simulation running in the cloud. This is what mimik edgeEngine and its different editions (standard, main/child, controller/worker) provide.
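As a small illustration of this split, the sketch below reduces a window of raw sensor readings to a compact “smart signal” on the device, so only the summary, rather than every low-level sample, travels to the aggregated twin in the cloud. The field names and threshold are illustrative:

```python
from statistics import mean

def summarize_window(samples, threshold):
    """Edge-side pre-analysis: reduce a window of raw sensor samples to a
    compact 'smart signal' for the cloud-side aggregated twin."""
    return {
        "n": len(samples),
        "mean": round(mean(samples), 3),
        "max": max(samples),
        "anomaly": any(s > threshold for s in samples),
    }
```

Sending a handful of summary fields instead of every sample directly addresses the cost, power, and privacy concerns listed above.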

The post mimik for Digital Twin first appeared on mimik Technology Inc.

]]>
Harnessing the Power of Hybrid Edge Cloud: Revolutionizing App Development with mimik https://miphawell.com/harnessing-the-power-of-hybrid-edge-cloud-revolutionizing-app-development-with-mimik/ Wed, 19 Jul 2023 17:00:00 +0000 https://miphawell.com/?p=81264 Introduction Recently, I have been receiving numerous inquiries from individuals who are intrigued by the concept of Hybrid Edge Cloud (HEC). They are keen to understand its practical applications and which types of applications can benefit the most from this innovative approach. In response to their curiosity, I have decided to shed some light on […]

The post Harnessing the Power of Hybrid Edge Cloud: Revolutionizing App Development with mimik first appeared on mimik Technology Inc.

]]>
Introduction

Recently, I have been receiving numerous inquiries from individuals who are intrigued by the concept of Hybrid Edge Cloud (HEC). They are keen to understand its practical applications and which types of applications can benefit the most from this innovative approach. In response to their curiosity, I have decided to shed some light on the versatility of HEC as a modern approach for cloud-native applications. While it is applicable to all applications, I will also highlight specific use cases that exemplify its potential and advantages.

In today’s rapidly evolving digital landscape, app developers are constantly seeking ways to optimize their applications. The HEC from mimik provides a revolutionary solution that combines the power of edge computing and cloud computing. It offers a host of benefits such as enhanced performance, reduced latency, data privacy, scalability, and cost and energy efficiency. This powerful combination has piqued the interest of developers across various industries.

In this blog, I aim to demystify the concept of HEC and explain why it is a modern approach suitable for all cloud-native applications. Additionally, I will delve into specific use cases that highlight the remarkable potential of HEC, showcasing its transformative impact in real-world scenarios.

Let’s dive in and explore how HEC can revolutionize app development, optimize performance, and unlock new possibilities for innovation.

Understanding Hybrid Edge Cloud (HEC)

HEC represents the convergence of edge computing and cloud computing, combining the best of both worlds. With mimik’s platform, developers can leverage the power of local smart devices as cloud servers capable of deploying microservices and the central cloud to enhance their applications’ performance, scalability, and flexibility.

Seamless App Integration

One of the key advantages of mimik’s HEC platform is its seamless integration with existing app development processes. Regardless of the programming languages or frameworks you prefer, mimik supports a wide range of options, enabling you to build applications using the tools you’re already familiar with. Furthermore, the platform is built fully at the application level and works across various operating systems and device types, ensuring your applications can run seamlessly on different platforms.

Enhanced Performance and Latency Reduction

Latency can make or break user experiences, especially in applications that require real-time interactions. HEC can drastically reduce latency by minimizing network round trips while leveraging the computing resources on smart devices. This approach results in faster response times and improved user experiences. Whether it’s a video streaming application, a real-time multiplayer game, or an IoT-based solution, the HEC ensures optimal performance for your applications.

Data Privacy and Security

In an era of increasing data breaches and privacy concerns, protecting user data is paramount. With HEC, app developers can keep sensitive data on local smart devices, reducing the risk of unauthorized access or data breaches. By minimizing the need to transfer sensitive information to the cloud, developers can maintain greater control over data privacy and security. mimik’s platform also employs robust trustless security measures and encryption protocols, further safeguarding user data.

Scalability and Cost Efficiency

Scaling applications based on demand is a crucial aspect of app development. HEC enables developers to scale their applications effortlessly. By distributing computing resources across a network of smart devices, developers can handle increased workloads without relying solely on dedicated cloud infrastructure. This results in improved scalability and cost efficiency, as developers can optimize resource utilization and reduce reliance on extensive cloud resources.

Real-World Use Cases

HEC has found significant applications in various industries, showcasing its remarkable capabilities. Let’s explore two specific examples: automotive and industrial IoT. These use cases demonstrate the transformative impact of HEC.

Software-defined vehicles (SDVs) offer the advantage of adaptability and improvement through software updates, much like smartphones. By decoupling hardware and software components, SDVs can harness technological advancements such as hyper-personalization, autonomous driving, and safety features without costly hardware upgrades. This flexibility ensures up-to-date functionality, enhanced user experiences, and a prolonged vehicle lifespan.

When combined with mimik’s HEC, SDVs gain two additional benefits. First, HEC reduces network dependency for SDVs. Traditional SDVs are often susceptible to network availability issues, which can hinder their operations. By leveraging mimik’s HEC, vehicle functions, exposed as function-as-a-service, can be meshed with various other functions at the edge of the network. This reduces reliance on a centralized network infrastructure, making SDVs more resilient to network conditions and enhancing their overall operational reliability.

Second, HEC enables hyper-personalization at the local level. SDVs have the capability to interact with multiple systems and devices, such as infotainment units and Telematics Control Units (TCUs) inside the vehicle, passenger smartphones, and smart city infrastructure. By leveraging HEC, SDVs seamlessly integrate and communicate with these vertically incompatible systems, facilitating a high degree of personalization for each user. This empowers SDVs to offer tailored experiences based on individual context and preferences, whether it’s adjusting the cabin environment, entertainment options, or personalized assistance features. HEC’s edge computing capabilities enable efficient and localized data processing, enabling SDVs to deliver real-time personalized experiences without heavy reliance on centralized cloud-based services.

Similarly, in industrial IoT applications, such as those found in the energy and mining sectors, connectivity to the cloud can be poor, expensive, or nonexistent. Energy and mining companies can leverage mimik’s HEC to overcome these challenges. By deploying edge endpoints and utilizing local computing resources, IoT devices can process and analyze data on-site, enabling real-time monitoring, predictive maintenance, and optimization of critical processes. This empowers companies to make timely decisions, improve operational efficiency, and reduce costs associated with transferring massive amounts of data to the cloud.
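To illustrate why on-site processing reduces the cost of moving data off-site, consider collapsing a window of raw sensor readings into a single summary record before anything leaves the site. The sketch below is hypothetical: the field names and alarm threshold are invented for illustration, and this is not mimik's API.

```python
def summarize_window(readings, alarm_threshold):
    """Collapse a window of raw sensor readings into one compact record.

    Only this summary, not the raw stream, needs to leave the site.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
        "alarm": max(readings) > alarm_threshold,
    }

# 1,000 raw vibration samples collapse into a single record.
window = [0.1 + 0.0001 * i for i in range(1000)]
summary = summarize_window(window, alarm_threshold=0.5)
```

Shipping one record instead of a thousand is the essence of the bandwidth savings described above; the local node can still raise the alarm flag in real time when a threshold is crossed.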

Conclusion

HEC empowers app developers with enhanced performance, reduced latency, data privacy, scalability, and cost efficiency. It finds practical applications in various industries, showcasing its remarkable capabilities. For the automotive industry, HEC empowers software-defined vehicles (SDVs) to function as local services, seamlessly integrating with local systems even in environments with limited network connectivity. In the energy and mining sectors, where cloud connectivity is often poor or nonexistent, mimik’s platform allows for on-site data processing, improving real-time monitoring and decision-making while keeping mission-critical data private and secure.

These examples highlight the transformative impact of HEC, but it’s important to recognize that its benefits extend to a wide range of cloud-native applications. Whether it’s healthcare, retail, smart homes, or any other domain, HEC offers a pragmatic solution that enhances performance, reduces latency, ensures data privacy, enables scalability, and optimizes cost and energy efficiency.

By seamlessly integrating edge computing and cloud computing, mimik’s platform unlocks new possibilities for innovation, providing developers with the tools to create cutting-edge applications that meet the evolving needs of consumers and enterprises. It bridges the gap between local smart devices and the cloud, allowing for distributed computing and improved user experiences.

The post Harnessing the Power of Hybrid Edge Cloud: Revolutionizing App Development with mimik first appeared on mimik Technology Inc.

]]>
The Journey to Autonomy: Unleashing the Power Within https://miphawell.com/the-journey-to-autonomy-unleashing-the-power-within/ Sun, 28 May 2023 21:45:00 +0000 https://miphawell.com/?p=81259

The post The Journey to Autonomy: Unleashing the Power Within first appeared on mimik Technology Inc.

]]>
Once upon a time, there lived a young and curious infant named Alex. Alex was a bright and adventurous boy, always eager to explore and learn from the world around him. As he grew older, his physical abilities and understanding of the world deepened.

In his early years, Alex relied on simple means to fulfill his needs. When hungry, he would cry out for attention, signaling his desire for nourishment. Like any caring parents, his mom and dad would respond by preparing a meal for him. But as Alex continued to learn and develop, something extraordinary happened — he started to become more and more autonomous.

No longer needing to cry for every meal, Alex’s mind evolved into a complex system of interconnected thoughts and capabilities. With a simple thought, he could decide whether to cook a meal at home using his own skills or venture out to discover the diverse culinary offerings of the city. He had become a self-sufficient entity, relying on his own intelligence and resources to navigate the world.

In this captivating analogy, Alex’s journey from infancy to autonomy mirrors the evolution our digital world needs. Today, internet applications are like Alex in his infancy, reliant on external resources and centralized cloud architecture for even the most basic tasks. However, for true autonomy to flourish, a transformation is required — a transition from dependence on centralized infrastructure to leveraging the local resources of the devices they run on, just like Alex relies on his brain and body parts as an adult.

Hybrid edge cloud (HEC) empowers these internet applications to tap into the local resources of the devices they reside on, much as Alex relies on his brain and body to function. It is a paradigm shift that unlocks the true power within, enabling applications to operate autonomously, adapt to their surroundings, and harness the potential of their host devices. Much like Alex’s transformation into a self-sufficient being, this transition from infancy to adulthood allows internet applications to mature and become capable of independent decision-making, reducing reliance on external resources.

By embracing this new era of technological innovation, we pave the way for an AI-enabled world where intelligent things can operate independently and make intelligent decisions. HEC acts as the missing link that harnesses the power of the local computing resources available on each device. It reduces latency, improves performance, and enables real-time decision-making. Much like Alex’s journey towards autonomy, HEC fosters an environment where internet applications can mature and grow into autonomous entities.

As we embark on this journey, the power within acts as a catalyst, empowering each internet application to unleash its true potential. By leveraging the local resources of the devices they run on, we can usher in a future where applications operate autonomously, tap into their own intelligence, and redefine the boundaries of what is possible.

So let us embark on this remarkable adventure, where the possibilities are endless, and the potential for innovation knows no bounds. By embracing the power within, internet applications can transcend their infancy and become autonomous, mature entities that choreograph their own destinies.


]]>
A primer to geolocation detection on the Edge https://miphawell.com/a-primer-to-geolocation-detection-on-the-edge/ Thu, 26 Jan 2023 08:03:51 +0000 https://stg-2x.miphawell.com/?p=78891

The post A primer to geolocation detection on the Edge first appeared on mimik Technology Inc.

]]>
There are two types of devices in edge computing: fixed and mobile. Examples of fixed devices are red-light traffic cameras, internet-aware refrigerators, smart TVs, and cash registers in a point-of-sale system. Examples of mobile devices are tablets, cell phones, and forklifts. Fixed devices are, as the name implies, stationary. They don’t move around. For example, once a traffic camera is installed on a city’s street corner, it doesn’t move. The same is true of an internet-aware refrigerator: you install it, plug it in, and connect it to the internet. The fridge doesn’t move around your home. It stays anchored in your kitchen.

Suppose for some reason, you need to make the location of your internet-enabled refrigerator known to outside parties. In that case, the typical process is to do some sort of online registration with the manufacturer in which you associate the refrigerator’s serial number with your physical address. Then, messages sent over the internet from the refrigerator can bind the machine’s IP address to the serial number. All this information is stored in a database somewhere. Hence, the physical location of the refrigerator is discoverable.
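Conceptually, that registration flow is just two lookup tables: serial number to street address, bound at registration time, and IP address to serial number, bound when the device reports in. A toy sketch of such a registry (all names and data below are hypothetical):

```python
class DeviceRegistry:
    """Toy location registry for fixed devices: two lookup tables."""

    def __init__(self):
        self._address_by_serial = {}  # filled at registration time
        self._serial_by_ip = {}       # filled when the device connects

    def register(self, serial, street_address):
        """Owner associates the device's serial number with an address."""
        self._address_by_serial[serial] = street_address

    def observe(self, ip, serial):
        """A message from the device binds its current IP to its serial."""
        self._serial_by_ip[ip] = serial

    def locate(self, ip):
        """Resolve an observed IP to the registered physical address."""
        serial = self._serial_by_ip.get(ip)
        return self._address_by_serial.get(serial)

registry = DeviceRegistry()
registry.register("FR-1234", "123 Main St, Des Moines, IA")
registry.observe("203.0.113.7", "FR-1234")
location = registry.locate("203.0.113.7")
```

This works precisely because the device never moves; the moment the hardware becomes mobile, the address table goes stale, which is why mobile devices need the techniques described next.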

Mobile devices, on the other hand, do move around. Thus, determining their location is not a matter of doing an address lookup in a database. The device needs to figure out where it is as its location changes. The precision of determining the location will vary, anywhere from a few inches to a few kilometers, depending on how the location of the device is detected. In some cases, a margin of error of a few kilometers might not matter. In other cases, being off by a kilometer can be a catastrophe. Thus, understanding the different ways of detecting the location of an edge device matters. Hence, the purpose of this article: to describe the various techniques for detecting the geolocation of edge devices.

In this article, we’re going to examine three techniques. The first is determining a device’s location using an IP address. The second is using GPS (Global Positioning System) and Differential Global Positioning System (DGPS). The third way we’re going to examine is an interesting alternative to the other two. Each method has benefits and tradeoffs that are worth understanding.

But before we go into these details, it is helpful to understand the essential principle of location detection: a subject never really knows where it is. Some sort of external, objective reference mechanism is needed.

Let’s take a moment to explore the principle.

Imagine that you closed your eyes to take a brief nap. Then, you wake up to find yourself lying in a country meadow. All you can see are birds and flowers and a tree or two. All that’s about you is nature. That’s the good news. The bad news is you don’t know where you are. Your surroundings are unfamiliar. There are no road signs around. You don’t have your cell phone with you, so you can’t do an automatic discovery using GPS.

You start walking through the meadow. A stranger approaches and you ask her where you are. She says, “Clarke County”. You have no idea of where Clarke County is, and you don’t have the lookup capabilities to figure it out. So, you still don’t know where you are.

You keep walking and come across another stranger. You ask the same question, “Where am I?” He responds, “Iowa.” You put two and two together and infer that you are in Clarke County, IA. You know where Iowa is, but you still have no idea where Clarke County is. The fact is that while you have a general idea of where you are, you could be in eastern, central, or western Iowa. Your operational margin of error is hundreds of miles. To have a clearer idea, you’d need some objective reference instrument, for example, a map of Iowa that includes a generally accepted coordinate system.

The interesting thing about all this is that, absent any referencing mechanism and a quantitative way to interpret the information from that mechanism, the only thing you know about your location at any given moment is that you are “here.” The same is true of edge devices. You need an external agent to tell you where the device is and a frame of reference to understand the information you’re being given. In short, if you want to know where you are, you need a map and must know how to use the coordinate system supported by that map.

This may seem tangential in terms of detecting the location of edge devices. Still, it is an important understanding, particularly when considering very sophisticated types of edge devices, for example, interplanetary satellites.

Now that we’ve covered this basic understanding, let’s look at the first way to detect the location of an edge device: using the device’s IP address.

Geolocation detection using an IP Address

Every device on the internet has an IP address. It doesn’t matter whether the device is on a public network or running privately behind a firewall or cable modem; it will have an IP address. That IP address does not appear by magic. It’s assigned by another mechanism. That mechanism can be a human or script that manually assigns an IP address, or the IP address can be assigned dynamically within a predefined range of addresses by a DHCP server. For the most part, the physical location of the device to which a public IP address is assigned can be discovered by looking up the IP address against an authority that keeps track of public IP addresses and their locations, for example, ARIN or RIPE. Thus, it’s possible to make a general estimate of the geographical location of an IP address. But these estimates are typically rough and can have a wide margin of error.

In order to make the point, we conducted an experiment in which we submitted a subject’s IP address to a variety of IP address lookup services using the tool DNSChecker.org. The goal was to determine the physical location that corresponded to the submitted IP. The IP address we used was 24.80.2.109. The results of the lookup by the various IP address lookup services are displayed in Table 1 below, along with the distance from the actual location of the submitted IP address.

Table 1: The physical latitude and longitude for the same IP according to a variety of lookup services
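A convenient way to express the “distance from the actual location” column in such an experiment is the great-circle distance between the coordinates a lookup service reports and the device’s known true coordinates. A minimal haversine sketch in Python (the two coordinate pairs below are illustrative stand-ins, not the actual experiment data):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative only: suppose a service reports Calgary while the device
# is actually in Vancouver; the lookup error is several hundred km.
error_km = haversine_km(49.2827, -123.1207, 51.0447, -114.0719)
```

Plugging each service’s reported latitude and longitude into `haversine_km` against the known true position yields the per-service error directly in kilometers.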

Geolocation detection using GPS

The way the Global Positioning System (GPS) works is that there are 27 GPS satellites orbiting the Earth, of which 24 are active, while the remaining three provide backup in case one of the active satellites fails. These satellites emit radio waves that are intercepted by a GPS receiver on the ground.

The GPS receiver uses signals from at least three satellites (in practice four, to also correct for receiver clock error) to trilaterate the receiver’s location. The location of the edge device can then be determined as latitude, longitude, and elevation, with a margin of error of six feet in 95% of cases.
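The distance-based positioning that GPS performs can be illustrated with a simplified 2-D trilateration sketch: given three anchors at known positions and a measured distance to each, the unknown position falls out of two linear equations. (Real GPS solves the 3-D problem and must also handle receiver clock error; this flat-plane toy shows only the geometry.)

```python
def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Locate a point from three (x, y) anchors and measured distances.

    Subtracting the circle equations pairwise yields two linear
    equations in x and y, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1)
    b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2)
    e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# Anchors at known positions; distances were measured to an unknown point.
x, y = trilaterate_2d((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
```

Any error in the measured distances propagates directly into the solved position, which is where the six-foot margin of error mentioned above comes from.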

Most modern cell phones and tablets have a GPS receiver built in. In cases where an edge device does not have one built-in, a GPS receiver can be attached. Adding an attachment is typical for enabling GPS on a small computer such as a Raspberry Pi.

Using GPS detection helps determine the location of a passenger pickup for a rideshare application. However, it won’t give you the degree of accuracy you need if you’re trying to determine how close you are to another car while driving down the highway. As alluded to above, a six-foot margin of error in heavy traffic on a major highway can result in tragedy.

However, there is a version of GPS that provides finer-grained detection: the Differential Global Positioning System (DGPS).

DGPS is a network of fixed, ground-based reference stations that broadcast the difference between the positions reported by the GPS satellite system and their known fixed positions. These stations broadcast the difference between the pseudoranges provided by the satellites orbiting the Earth and the actual, internally computed pseudoranges, and nearby receivers may correct their own pseudoranges by the same amount. The digital correction signal is typically broadcast locally by shorter-range, ground-based transmitters. DGPS has a margin of error that ranges from 15 meters (49 ft), the high end of GPS accuracy, down to about 1–3 centimeters (0.39–1.18 in), well below the 6 ft low end of GPS. Thus, in many cases, a DGPS receiver can detect an edge device with an accuracy of inches. This type of accuracy is perfectly acceptable for making a pizza delivery to a room in a college dormitory. However, it’s still a risk when a self-driving vehicle travels a highway at high speed. Fortunately, when it comes to effective location detection for automobiles driving at high speeds, there is an alternative approach: sensors.
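The pseudorange-correction idea behind DGPS can be sketched in a few lines: a reference station at a precisely surveyed position measures the difference between each satellite's reported pseudorange and the range it knows to be true, then broadcasts those differences so nearby receivers can subtract them from their own measurements. A toy model with invented numbers (real corrections vary per satellite and over time):

```python
def station_corrections(measured, true_ranges):
    """Per-satellite pseudorange error observed at a fixed reference station."""
    return {sat: measured[sat] - true_ranges[sat] for sat in measured}

def apply_corrections(receiver_ranges, corrections):
    """Subtract the broadcast error from a nearby receiver's pseudoranges."""
    return {sat: r - corrections[sat] for sat, r in receiver_ranges.items()}

# Hypothetical numbers in meters: each satellite's signal carries a
# shared atmospheric error that the fixed station can observe directly.
corr = station_corrections(
    measured={"sat1": 20_000_015.0, "sat2": 21_500_012.0},
    true_ranges={"sat1": 20_000_000.0, "sat2": 21_500_000.0},
)
corrected = apply_corrections(
    {"sat1": 20_300_015.0, "sat2": 21_100_012.0}, corr
)
```

Because a nearby receiver sees nearly the same atmospheric error as the station, subtracting the broadcast correction removes most of the shared error, which is what shrinks the margin from meters to centimeters.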

Alternative Approach to Device Location

Let’s revisit the above-mentioned principle: a subject never knows its location. It needs some external, objective reference mechanism to make the determination. Street signs, IP address lookups, and GPS/DGPS provide such references. Being told where you are is an important aspect of location detection. But there is another way to look at things. While a subject may not be able to determine where it is, it can determine what’s nearby and how far away external objects are. All it needs to do is look around. Hence the benefit of using an optical sensor. After all, what are your eyes if not optical sensors?

Self-driving cars use optical sensors, as do robotic vacuum cleaners. You can add software to your cellphone to enable distance determination utilizing the phone’s camera as the optical sensor.
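One simple way an optical sensor turns an image into a distance is the pinhole-camera relationship: for an object of known physical size, its apparent size in the image is inversely proportional to its range. A minimal sketch (the focal length and object size below are invented for illustration; real perception stacks are far more sophisticated):

```python
def distance_from_size(focal_length_px, real_height_m, image_height_px):
    """Estimate range to an object of known physical size.

    Pinhole model: image_height / focal_length = real_height / distance,
    so distance = focal_length * real_height / image_height.
    """
    return focal_length_px * real_height_m / image_height_px

# A 1.5 m tall object spanning 100 px in a camera with an 800 px
# focal length is about 12 m away.
d = distance_from_size(focal_length_px=800, real_height_m=1.5, image_height_px=100)
```

Note that the answer is relative: the sensor learns how far away the object is, not where either of them sits on the map, which is exactly the reframing this section describes.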

Optical sensors become particularly important for automated IoT devices that need to work in close proximity to one another, for example, robotic forklifts in a warehouse.

Combining optical sensors with GPS/DGPS tracking can provide the level of detail required for highly accurate location detection of edge devices. You don’t need to know where you are in order to make that determination; all you need to know is how far away something else is. It’s an intriguing approach to location detection that’s still evolving.

Putting It All Together

Edge computing and edge devices will continue to grow as a presence both on the internet and in the physical world. Grand View Research, Inc. reports that the edge computing market is expected to have a compound annual growth rate (CAGR) of 38.4% and reach a market size of $61.14 billion USD by 2028. These are not trivial numbers.

Many, if not most, of those edge devices will need to know where they are to do the work they’re intended to do. This means that location detection is not a “nice to have”; it’s a mission-critical requirement. However, as described in this article, there’s a lot of variety in location detection techniques. It is important to understand what these techniques are, how they work, and how they’re best used. In some cases, it’s a matter of life and death. As you can see, there’s a lot to know, and the information provided here is a good starting point from which to grow your understanding.


Did you know:

Powered by mimik’s edgeEngine, the Ad Hoc Service Mesh technology enables discovery, connection, and communication among nodes (devices) that can belong to three types of clusters. The cluster types are called Network, Account, and Proximity.

Network cluster – nodes that are part of the same network.

Account cluster – nodes that are part of the same user account.

Proximity cluster – nodes that are close to one another in terms of physical geo-location.

Machines and devices in an Account or Proximity cluster can reside anywhere. Their association with one another extends beyond the boundaries of a single network.
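As a toy illustration of the three cluster types, consider classifying which clusters two nodes share. This is an illustrative model only, not mimik's actual API or clustering mechanism; the node fields and the naive proximity threshold are invented.

```python
def shared_clusters(node_a, node_b):
    """Toy classification of the cluster types two nodes share."""
    clusters = []
    if node_a["network"] == node_b["network"]:
        clusters.append("network")   # same local network
    if node_a["account"] == node_b["account"]:
        clusters.append("account")   # same user account
    # Naive proximity check: treat a small coordinate delta as "close".
    if (abs(node_a["lat"] - node_b["lat"]) < 0.01
            and abs(node_a["lon"] - node_b["lon"]) < 0.01):
        clusters.append("proximity")
    return clusters

phone = {"network": "home-wifi", "account": "alice", "lat": 49.28, "lon": -123.12}
tv = {"network": "home-wifi", "account": "alice", "lat": 49.28, "lon": -123.12}
remote_pc = {"network": "office-lan", "account": "alice", "lat": 43.65, "lon": -79.38}

home_clusters = shared_clusters(phone, tv)
remote_clusters = shared_clusters(phone, remote_pc)
```

The second pair shares only the Account cluster, illustrating the point above: account membership survives even when two devices are on different networks and far apart.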


Learn More


]]>
Understanding the limits of replication and redundancy under edge architectures https://miphawell.com/understanding-the-limits-of-replication-and-redundancy-under-edge-architectures/ Tue, 08 Nov 2022 16:10:24 +0000 https://stg-2x.miphawell.com/?p=77236

The post Understanding the limits of replication and redundancy under edge architectures first appeared on mimik Technology Inc.

]]>
Executive Summary
  • Edge computing and IoT-based distributed architectures differ from architectures based on orchestration frameworks targeted for implementation within a data center.
  • Edge computing and IoT architectures are intended for dedicated devices used over a wide geography.
  • As such, the redundancy and replication techniques used for systems hosted in data centers do not apply.
  • In order to address this difference, architects need to alter the way they think about redundancy and replication within the edge computing paradigm.

Replication and redundancy have been key components of computing for a long time, since the heyday of the mainframe. Back then, if a mainframe lost power, everything stopped. Organizations addressed this risk by keeping generators and power supplies on hand to supply redundant electrical backups. If power from the main power grid failed, the generators took over. No electricity was lost.

Mainframes also stored data exclusively on or within the machine. Thus, if the storage mechanism failed, data was lost. So companies backed up their data to tape, an early form of data replication.

When personal computers first appeared, they, too, used the same redundancy and replication techniques as mainframes. A user kept an uninterruptible power supply close by in case of power failure, and data was replicated to tape or floppy disk.

Things changed when networking PCs together made distributed computing possible. This was particularly telling in database technology. Companies networked a number of computers together. One computer hosted the database server. Other computers acted as file servers that stored the data the database used.

Eventually, database technology matured to the point where the database was smart enough to replicate data among a variety of machines. Database technology progressed even further. Multiple databases that had the same processing logic were placed behind a load balancer – a traffic cop, if you will. The load balancer routed incoming traffic among the various redundant database servers. This redundancy avoided overloading the system.

Redundancy and replication have withstood the test of time. Both are used extensively today, most noticeably with applications that are hosted in a data center and accessed over the internet. Yet, as popular as replication and redundancy are, they are not without limits. These limitations become particularly apparent when working with edge computing and the Internet of Things.

Distributed systems at the edge are not the same as distributed systems that are hosted within a data center. Edge computing is a new approach to machine distribution that requires new thinking. This difference requires those designing distributed applications for edge computing to reconceptualize replication and redundancy.

The purpose of this article is to examine new ways to think about replication and redundancy as it relates to distributed edge computing. The place to start is to understand the essential difference between the traditional approach to distributed computing, which focuses on the data center, and distributed computing on edge devices and the Internet of Things.

Typical redundancy in a distributed system in a data center

A typical approach to distributed computing is to use a pattern in which replicas of a particular algorithm are represented by a service layer. The service then becomes one of many services, each representing a different algorithm, accessed via some sort of gateway mechanism. Each service has load-balancing capabilities that ensure no one instance of its underlying algorithm is overloaded. (See Figure 1.)

Figure 1: A typical pattern in distributed architecture in which redundancy ensures availability and efficient performance

Kubernetes uses this type of distribution pattern, as does Docker Swarm.

The benefit of this pattern is that using redundant algorithms ensures resilience. If one of the instances goes down, other identical instances of the algorithm are still available to provide computing logic. And, if automatic replication is in force, when an instance goes down, the replication mechanisms can try to resurrect it. If the instance can’t be reinstated, the replication mechanism will create a new one to take its place. Replication of this type is used by Kubernetes with its Deployment resource.
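The self-healing behavior described above can be modeled as a reconciliation loop: compare the desired replica count with the set of instances that are actually healthy, and create replacements until the count is restored. The following is a toy model of the idea, not Kubernetes' actual implementation; the instance names are invented.

```python
def reconcile(desired_count, instances):
    """One pass of a toy replication controller.

    `instances` maps instance id -> True (healthy) or False (failed).
    Failed instances are dropped, and new ones are created until the
    desired count is restored.
    """
    healthy = sorted(i for i, ok in instances.items() if ok)
    created = []
    while len(healthy) + len(created) < desired_count:
        created.append(f"replica-new-{len(created) + 1}")
    return healthy + created

# Two of four replicas have failed; one pass restores the count.
state = {"r1": True, "r2": False, "r3": True, "r4": False}
new_state = reconcile(4, state)
```

The crucial assumption baked into this loop is that a replacement can always be conjured on demand, which holds for virtual machines and containers in a data center but, as the next section shows, breaks down for physical edge devices.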

As powerful as this type of architecture is, it’s not magical. A lot of work needs to go into getting and keeping an architecture of this type up and running. First and foremost, the various components that make up the system need to know a good deal about each other. At the logical level, a service needs to know about its algorithms, and the gateway mechanism needs to know about the services it’s supporting. At the physical level, services and algorithms reside on separate machines; therefore, access between and among machines needs to be granted accordingly. This can become an arduous task. Imagine an architecture like the one shown below in Figure 2. The service lives on one machine, and each instance and its algorithms live on a distinct machine. Should one machine go down, another needs to replace it. In the old days, this meant that someone actually had to go down to a data center, physically install the machine, and then add it to the network.

Figure 2: Replication can be very hard to support at the hardware level, particularly when a new machine needs to be added to the system.

Of course, modern distributed technologies have evolved to the point where machine replacement means nothing more than spinning up a virtual machine on a host computer and then adding that VM to the network. However, while automation will do the work, the laws of time and space still exist. It takes time to spin up the VM, and that new VM needs to be added to the network and made available to the application.

Fortunately, orchestration technologies for Linux containers, most notably Kubernetes, have significantly reduced the risk of large-scale failure, even at the hardware level. However, while this type of pattern works well within the physical confines of a data center or among many data centers, systems that rely on redundancy and replication experience significant limitations when it comes to edge computing.

The limits of redundancy and replication in edge architecture

The essential idea of edge computing is that a remote device has the ability to execute predefined computational logic and also communicate with other devices to do work. One of the more common examples of edge computing is the red-light traffic camera.

A municipality places a camera at a traffic intersection controlled by a red light. When a motor vehicle runs the red light, the camera has the intelligence to detect the violation and take a photo of the offender. The device can then send the photo of the offending vehicle, along with metadata describing the time of the violation, to another computer that acts as a data collector. The collector can either process the photo and metadata on its own or pass it all on to other intelligence that can do the analysis. (See Figure 3, below.)

Figure 3: Red-light traffic cameras are a commonplace example of edge computing.

What distinguishes the red-light traffic camera as an edge device is that it has intelligence. Unlike a closed-circuit television system, in which the camera does nothing more than transmit an ongoing video signal back to a television monitor in another location, a red-light traffic camera understands some of what it sees and makes law-enforcement decisions. No human evaluates the video transmission; computational intelligence does it all. Cameras are distributed across the city, and each camera can communicate back to a central collector. Thus, you can think of a red-light camera system as a distributed architecture.

But, while a red-light camera system is indeed a type of distributed architecture, it does have a significant shortcoming. Such a system is incapable of supporting automated redundancy and replication.

Think about it.

Should the red-light camera on the corner of Main St. and 6th Ave go offline, that capability for monitoring traffic goes away too. No red-light violations will be reported by that device until a technician goes out into the field and repairs the camera.

So then, given the inherent limitation of this type of distributed architecture, how do we create traffic camera systems that have redundancy built in? The easiest solution is to put a number of traffic cameras at each intersection but make only one operational. If the operational camera goes offline, intelligence back on the controller will notice that the first camera is not working and turn on the backup to take its place. (See Figure 4.)

Figure 4: Edge devices in the real world require real world redundancy.
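The backup-camera scheme amounts to a priority-ordered failover check on the controller: poll devices in order and promote the first one that responds. A hypothetical sketch (a production system would add health-check debouncing, alerting, and a repair ticket for the dead primary):

```python
def select_active(cameras, is_online):
    """Return the first camera in priority order that responds.

    `cameras` is an ordered list (primary first, then standbys);
    `is_online` is a health-check callable taking a camera id.
    """
    for cam in cameras:
        if is_online(cam):
            return cam
    return None  # no device at this intersection is reachable

# The primary is down, so the first standby is promoted.
status = {"cam-primary": False, "cam-backup-1": True, "cam-backup-2": True}
active = select_active(
    ["cam-primary", "cam-backup-1", "cam-backup-2"], status.get
)
```

Unlike the data-center case, the pool here is fixed at installation time: when the last standby at an intersection fails, no automation can create another one, and a technician has to go into the field.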

Having backup devices on hand is a typical way of doing redundancy in the real world. Hospitals are designed with generators that provide electricity in the event of a power failure from the public grid. Practically all industrial-strength data centers use backup power generators too.

The notion of a physical backup is not confined to electricity. Professional rock bands travel with an extra set of amplifiers to ensure that if an amplifier malfunctions on stage, a replacement is readily available to plug in. Guitarists usually keep a backup guitar on hand in case a string breaks. As they say, the show must go on, even in the world of edge computing.

Edge computing vs the data center

The most important thing to understand about edge architectures is that they are different from architectures that are intended for devices in a data center.

These days, most devices in a data center are virtualized in terms of computing resources and networking. Because of their virtual nature, they can be replaced easily using automation. Kubernetes can redirect traffic away from a failing piece of hardware, and if the alternative hardware becomes overworked, modern provisioning software can automatically detect available hardware and spin up a new VM. Kubernetes can then take over and create the virtual assets needed to keep things going.
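A minimal sketch of this kind of data-center automation, assuming a hypothetical `traffic-analytics` service running in Kubernetes: the Deployment keeps three replicas alive, and the liveness probe lets Kubernetes restart any replica that stops responding. The service name, image, and probe endpoint are placeholders, not references to a real workload:

```yaml
# Kubernetes keeps three replicas running and restarts any pod
# whose liveness probe fails -- no technician required.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traffic-analytics
spec:
  replicas: 3
  selector:
    matchLabels:
      app: traffic-analytics
  template:
    metadata:
      labels:
        app: traffic-analytics
    spec:
      containers:
      - name: analytics
        image: example/traffic-analytics:1.0   # placeholder image
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

This is exactly the "virtual magic" that physical edge devices cannot rely on: a pod can be respawned anywhere in the cluster, but a camera cannot.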

Of course, things can go very wrong quickly when a data center goes offline or a network wire gets cut by accident. However, while these cases can be catastrophic, they are rare. More often than not, failures occur among virtual devices.

On the other hand, edge devices are real, not virtual, and a whole class of edge devices is mobile: robots, tractors, forklifts, and delivery trucks, for example. Thus, the techniques that are standard for data center replication do not apply. Replication is very much about the physical device and the geography in which it operates. For example, how do you replicate intelligence in a cell phone performing some mission-critical operation on an oil rig in the middle of the North Sea? How do you provide redundancy for a robotic tractor tilling an irrigated field in a remote area of Sub-Saharan Africa? Even at a consumer level, the Internet-enabled refrigerator in my house is in my house! If it fails, I can only go across the street and use my neighbor's, assuming I have very generous neighbors.

The essential question becomes, how does a company implement redundancy and replication in edge architecture?

When designing edge architecture and architectures for IoT, it is essential to remember that these devices exist as physical entities in the world and need to be accommodated as such. There is no virtual magic to be had. If you want to build redundancy into your edge architecture, as shown in the red-light traffic camera example above, it needs to be done on the physical plane.

You need to plan for backup devices that are readily available in terms of time and real space. This means having physical backups on hand, whether the device is a cell phone or forklift. Yes, this approach is a bit old school, but nonetheless, the solution is valid. Bringing one-size-fits-all virtualization thinking to real assets in the real world won’t work. When it comes to edge architectures, the devil is in the device. The takeaway is simple: have a physical backup on hand.

Did you know:

Did you know that the hybrid edgeCloud provides the opportunity to take advantage of collaboration and resource sharing across devices?

Download the IEEE Article: “Hybrid Edge Cloud: A Pragmatic Approach for Decentralized Cloud Computing”


The post Understanding the limits of replication and redundancy under edge architectures first appeared on mimik Technology Inc.
