October 5, 2023
Edge Systems
Embedded systems are specialized computing systems designed to perform dedicated functions or tasks within a larger system, often under real-time constraints. Unlike general-purpose computers and server-grade hardware that can run a wide range of applications, embedded systems are optimized to execute specific applications on specialized hardware, which brings benefits in resource optimization and the ability to exploit purpose-built components. The line between embedded and common-computing is beginning to blur: as computing power grows in ever-smaller devices, more workloads are being consolidated to run at the edge.
Edge is an ambiguous term; Mainsail defines it as workloads running outside a public cloud or private data center, close to the end user.
In fact, IBM states that “by 2025, 75% of enterprise data will be processed at the edge, compared to only 10% today.¹”
Edge computing is where the next generation of sensors, data, and services will emerge. Devices will continue to consolidate as much processing power as possible, combining functions to reduce size, weight, power, and cost (SWaP-C). Mainsail recognizes that compute at the edge will increase exponentially every year with the inclusion of AI/ML and next-generation sensors.
Hypervisors are used to consolidate workloads and optimize hardware utilization, but a lift-and-shift approach to embedded workloads often runs into problems. Consolidating real-time operating systems (RTOS) on traditional virtualization platforms can pose challenges due to the unique requirements of real-time systems. Here are the main reasons, with a short sketch after the list that illustrates the timing problem:
- Predictable Response Times: One of the defining characteristics of an RTOS is its ability to respond to an external stimulus within a predictable time frame. Traditional virtualization can introduce unpredictable latencies because the additional layer (the hypervisor) adds scheduling delays and contention for resources among virtual machines (VMs).
- Hypervisor Overhead: Traditional virtualization relies on a hypervisor to manage multiple VMs, and that hypervisor introduces overhead that can be problematic for real-time applications. Any delay the hypervisor adds in scheduling VMs or handling interrupts can lead to missed real-time deadlines.
- Resource Contention: In a virtualized environment, multiple VMs share the same physical resources, such as CPU, memory, and I/O. If two VMs demand a resource simultaneously, the resulting contention can lead to unpredictable behavior for real-time tasks.
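To make the timing concern concrete, below is a minimal Python sketch (an illustration, not anything Metalvisor-specific) that runs a periodic 1 ms loop and records how late each wake-up is. On a loaded or traditionally virtualized host, the worst-case lateness typically grows far beyond what a hard real-time deadline tolerates.

```python
import time

PERIOD_NS = 1_000_000   # target period: 1 ms
ITERATIONS = 5_000

def worst_case_lateness_ns() -> int:
    """Run a periodic loop and track how far each wake-up misses its deadline."""
    worst = 0
    next_wake = time.perf_counter_ns() + PERIOD_NS
    for _ in range(ITERATIONS):
        # Sleep until the next period boundary.
        delay_ns = next_wake - time.perf_counter_ns()
        if delay_ns > 0:
            time.sleep(delay_ns / 1e9)
        # Lateness = how far past the deadline we actually woke up.
        worst = max(worst, time.perf_counter_ns() - next_wake)
        next_wake += PERIOD_NS
    return worst

if __name__ == "__main__":
    print(f"worst-case wake-up lateness: {worst_case_lateness_ns() / 1000:.1f} µs")
```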
Many sensor, communications, controller, and 5G workloads are low-latency or real-time and require guaranteed quality of service to work correctly; most 5G workloads use real-time Linux kernels to provide the determinism they need. In many cases, these workloads run directly on bare metal to achieve that determinism.
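As an illustration of what such workloads expect from the platform, here is a minimal sketch (Linux only, requires root or CAP_SYS_NICE; the priority and CPU number are arbitrary examples) of how a real-time Linux process typically pins itself to a core and requests the SCHED_FIFO scheduling class:

```python
import os

def make_realtime(priority: int = 80, cpu: int = 2) -> None:
    """Pin the current process to one CPU core and request real-time scheduling."""
    # Restrict the process to a single, ideally isolated, CPU core.
    os.sched_setaffinity(0, {cpu})
    # Switch to the SCHED_FIFO real-time policy so ordinary tasks cannot preempt it.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))

if __name__ == "__main__":
    make_realtime()
    # ... deterministic processing loop would run here ...
```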
Consolidating these types of low-latency and real-time workloads onto a hypervisor reduces SWaP-C and brings the operational and deployment benefits of virtualization.
“Metalvisor was designed with processor-based micro-segmentation to isolate workloads and meet the next-generation of requirements for Zero Trust in the DoD. Metalvisor truly is Zero Trust at the silicon level.” - Eric Van Arsdall
Metalvisor: A TypeZero Hypervisor
Metalvisor is a unique, new type of hypervisor: a TypeZero hypervisor. Metalvisor takes the resource-optimization and hardware-isolation features found in embedded systems and brings them to the world of common-computing. This enables real-time operating systems to be consolidated alongside standard workloads, an arrangement known as mixed criticality. TypeZero hypervisors differ from traditional Type 1 and Type 2 hypervisors.
Here are a few defining characteristics of TypeZero hypervisors:
- Position in the Stack: A TypeZero hypervisor is embedded directly into the hardware, with no distinct layer between itself and the physical system. It's integrated at the firmware or chip level.
- Performance: Because they are built directly into the hardware, TypeZero hypervisors offer optimal performance with minimal latency and overhead.
- Use Cases: TypeZero hypervisors are used in highly specialized scenarios where quick decisions and processing are paramount—e.g., real-time systems, specific IoT devices, or certain military applications.
- Features: Typically, these hypervisors support only essential functions and lack many of the orchestration capabilities found in Type 1 or Type 2 hypervisors. Their primary focus is speed and reliability. Newer TypeZero hypervisors support workload automation.
- Security: Because it sits closer to the hardware and has a more minimal design, a TypeZero hypervisor presents a smaller attack surface, leading to improved security.
Metalvisor Security
Eric Van Arsdall, CEO | Co-Founder, Mainsail: The inception of Metalvisor was driven by a pressing need within the DoD and other government agencies to enhance the efficiency, security, and flexibility of their edge computing infrastructure. In an era where information superiority and rapid decision-making are paramount, traditional hypervisors simply fell short of meeting the rigorous demands of edge computing environments. Metalvisor was born out of a commitment to address the unique challenges faced by the DoD and other government agencies when it comes to edge computing. Our TypeZero hypervisor is a testament to innovation, security, and adaptability. It is not just a product; it's a solution that empowers our nation's defenders to operate effectively and securely at the edge, safeguarding our national interests.
At Mainsail, we are proud to stand shoulder to shoulder with the DoD in its mission to protect and defend, and we remain dedicated to pushing the boundaries of what's possible in the ever-evolving landscape of edge computing.
A significant differentiator is that Metalvisor is built with next-generation security to protect workloads holistically. Metalvisor starts by removing as much of the software code base as possible; less code means a smaller attack surface in which attackers can find and exploit vulnerabilities. Metalvisor has a trusted compute base of roughly 200,000 lines of code.
Metalvisor was custom-built to use the security functions built into Intel CPUs and to provide hardware isolation for workloads, which has significant security and performance benefits. Because of this low-level hardware isolation, side-channel attacks such as Spectre and Meltdown become harder to execute thanks to the shared-nothing architecture and defense-in-depth Metalvisor provides.
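On any Linux guest, you can see which side-channel vulnerabilities the kernel recognizes and how they are mitigated; the short sketch below simply reads what the kernel exposes (the exact set of entries varies by kernel version and CPU, and this check is not specific to Metalvisor):

```python
from pathlib import Path

# The Linux kernel reports known CPU side-channel vulnerabilities (Spectre,
# Meltdown, and others) and their mitigation status in sysfs.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```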
Metalvisor is launched from the UEFI/firmware layer, where it sets up secure domains via hardware partitioning; Virtual Machines are then launched into those domains via LibVirt (a standard API for VMs). These domains are unique from a security perspective. If a guest VM or workload is compromised, the domain prevents lateral movement by an attacker, because domains are established from the firmware, beneath the operating system and beneath the virtualization layer. This low-level partitioning extends down to the CPU for micro-segmentation, something the DoD anticipates many software and hardware companies will achieve by 2030, according to the DoD CIO Zero Trust Guidebook. This is why we say Metalvisor is Zero Trust at the silicon level.
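Because VM lifecycle management goes through LibVirt, standard tooling applies. The sketch below uses the libvirt-python bindings to define and start a domain; the connection URI and domain XML here are illustrative placeholders, not Metalvisor's actual driver or schema:

```python
import libvirt

# Illustrative domain definition only; the real machine type, devices, and the
# Metalvisor-specific connection URI would come from Mainsail's documentation.
DOMAIN_XML = """
<domain type='kvm'>
  <name>edge-workload</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

def launch_vm(uri: str = "qemu:///system") -> None:
    # Connect to the hypervisor through the standard libvirt API.
    conn = libvirt.open(uri)
    try:
        # Define the domain from XML, then start it.
        dom = conn.defineXML(DOMAIN_XML)
        dom.create()
        print(f"started {dom.name()} (id {dom.ID()})")
    finally:
        conn.close()

if __name__ == "__main__":
    launch_vm()
```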
Metalvisor has advanced security functions built in to protect workloads: a foundational security architecture, data encryption (at rest, in transit, and in use), and always-on protection via built-in agents.
- Zero Trust: Designed with processor-based Zero Trust at the silicon level. Hardware-based isolation & microsegmentation at the CPU level.
- Confidential Compute: Full memory encryption with unique encryption keys for each VM. No refactoring or additional software needed.
- Active Response Capabilities: Built-in to stop zero-days and other exploits/malware.
- Immutable Infrastructure: Prevents unauthorized changes to hardware and software, securing workloads with customer-owned encryption keys.
- Trusted Compute Base: Roughly 200,000 lines of code in total, plus a secure BIOS.
- Meets & Exceeds: NIST SP 800-207 Zero Trust guidance.
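As a quick sanity check related to the confidential-compute capability, the snippet below (a generic host-side check on Linux, not part of Metalvisor's tooling; the "tme" flag name reflects how recent kernels report Intel Total Memory Encryption) shows whether the CPU advertises memory-encryption support:

```python
def cpu_flags() -> set[str]:
    """Return the CPU feature flags reported in /proc/cpuinfo (Linux only)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    print("memory encryption (tme) advertised:", "tme" in cpu_flags())
```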
Metalvisor for Modern Workloads
Mainsail’s Metalvisor brings many low-level capabilities found in embedded systems to common compute, where modern workloads like Kubernetes can benefit from enhanced security and deterministic performance. Kubernetes and container-based applications can be deployed via automation such as Ansible, Terraform, or your favorite Kubernetes deployment tool. Metalvisor brings hardware-based isolation to Kubernetes, ensuring separation between workloads and high quality of service (QoS); a brief example follows the list below.
- Determinism: This helps to run demanding edge workloads that require high determinism and quality of service, like 5G and AI/ML workloads.
- Confidential Compute: Metalvisor transparently encrypts memory so workloads can benefit from confidential compute and protect data in-use.
- No Virtualization Tax: Metalvisor removes the virtualization overhead and allows you to run Kubernetes workloads without worrying about degraded performance experienced with traditional virtualization.
- Bare Metal Performance: Run Kubernetes and Container workloads with the same profile as a bare-metal machine, but with the benefits of virtualization and Linux compatibility.
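To pair a latency-sensitive workload with the isolation and QoS described above, the usual Kubernetes-side step is to give the pod the Guaranteed QoS class by setting resource requests equal to limits. This minimal sketch uses the official Kubernetes Python client; the pod name, image, and resource sizes are placeholders, and cluster access via a standard kubeconfig is assumed:

```python
from kubernetes import client, config

def create_guaranteed_pod(namespace: str = "default") -> None:
    """Create a pod whose requests equal its limits, yielding the Guaranteed QoS class."""
    config.load_kube_config()  # assumes a working kubeconfig
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="latency-sensitive-app"),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="app",
                    image="registry.example.com/edge/app:latest",  # placeholder image
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "2", "memory": "2Gi"},
                        limits={"cpu": "2", "memory": "2Gi"},
                    ),
                )
            ]
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)

if __name__ == "__main__":
    create_guaranteed_pod()
```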
Metalvisor offers optimal performance with minimal latency, making it suitable for real-time systems. It also emphasizes next-generation security and encryption, reducing the attack surface by minimizing its software code base. Metalvisor protects workloads against advanced threats.
Schedule a call to see a demo and find out more about how Metalvisor can help secure workloads.
Brad Sollar, CTO Mainsail | Army Veteran
- Mainsail Red Hat Whitepaper: https://www.mainsailindustries.com/resources