Our Technology

What is Ziptilion™ and how does it work?

Existing software-based memory compression technologies such as ZSWAP/ZRAM are available today to compress a fraction of memory in order to reduce page swap-out. Unlike such solutions, our low-latency compression block can compress the whole memory and therefore offers significantly higher potential to expand memory. In addition, we offer the option to physically extend the memory address space, which further reduces the latency of accessing the expanded memory.

Our solution compresses and decompresses at cache-line granularity to avoid compression/decompression overhead when handling memory accesses without clear patterns. The approach supports page deduplication, which can be handled synergistically by our IP block. Our approach is compatible with ECC and with memory encryption functionality when handled in the memory controller. To get a good understanding of our technology, we suggest reading our whitepaper.
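As an illustration of what cache-line-granularity compression means in practice (this is a textbook base+delta sketch, not Ziptilion™'s proprietary algorithm), the following Python snippet compresses a single 64-byte line by storing one 8-byte base word plus a 1-byte delta per word, a pattern that works well for lines holding nearby pointers or small integers:

```python
import struct

LINE_BYTES = 64  # one cache line = eight 8-byte words

def compress_line(line: bytes):
    """Simplified base+delta encoding: keep the first word as a base and
    every word as a 1-byte signed delta from it. Falls back to the raw
    line when any delta does not fit. Returns (compressed?, payload)."""
    assert len(line) == LINE_BYTES
    words = struct.unpack("<8Q", line)
    base = words[0]
    deltas = [(w - base) % (1 << 64) for w in words]
    signed = [d - (1 << 64) if d >= (1 << 63) else d for d in deltas]
    if all(-128 <= s <= 127 for s in signed):
        # 8-byte base + 8 delta bytes = 16 bytes instead of 64 (4:1)
        return True, struct.pack("<Q", base) + bytes(d & 0xFF for d in deltas)
    return False, line

def decompress_line(compressed: bool, payload: bytes) -> bytes:
    """Exact (lossless) inverse of compress_line."""
    if not compressed:
        return payload
    base = struct.unpack("<Q", payload[:8])[0]
    words = [(base + (b - 256 if b >= 128 else b)) % (1 << 64)
             for b in payload[8:]]
    return struct.pack("<8Q", *words)
```

Because each line is handled independently, any single cache-line access can be decompressed without touching neighbouring data, which is what keeps per-access overhead low for irregular access patterns.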

The Ziptilion™ Technology

The Challenge

Technology continues to grow in complexity and computer applications are becoming increasingly data intensive, requiring both real-time and off-line manipulation of massive amounts of data. More and more programs and applications must run at the same time, demanding huge and growing memory capacity. Meanwhile, technology users continue to expect high (and even improved) speed and performance.

The fundamental performance and speed constraint on almost all computing systems is the memory system. The main memory, also known as the RAM (Random Access Memory), has a direct impact on how many programs can be executed at the same time and how much data is readily available to them. Limitations to effective memory can significantly hamper or even prevent the use of some advanced applications.

In the past, computer industry manufacturers have temporarily overcome processing speed limitations by reducing the circuit size, changing to parallel processor design as well as stacking memory chips in 3D. However, as transistor sizes approach the size of atoms, the physical limits of shrinking circuits are being reached and processors cannot be stacked without overheating. Manufacturing costs are becoming prohibitive, as the design and production process gets more complex.

The Memory Wall
Main Memory Compression

Our innovation

Ziptilion™ is a patented compression/decompression technology that is capable of exploiting the low information entropy in computer memory to store and transport information as densely as possible, doubling memory capacity and bandwidth and potentially the speed and performance as well.
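The "low information entropy" claim can be made concrete: memory pages are typically dominated by zeros and repeated values, so their Shannon entropy sits far below the 8 bits per byte of incompressible data. A small illustrative sketch:

```python
import math
from collections import Counter

def bits_per_byte(buf: bytes) -> float:
    """Shannon entropy of a buffer in bits per byte.
    8.0 means incompressible; values near 0 compress extremely well."""
    counts = Counter(buf)
    n = len(buf)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

zero_page = bytes(4096)            # e.g. freshly allocated memory: entropy 0
uniform = bytes(range(256)) * 16   # every byte value equally likely: entropy 8
```

Real workloads fall between these extremes, which is why compression ratios depend on the application and its data.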

It can be installed in the System-on-Chip (SoC) of servers, smartphones, tablets, and computers, including high-performance computers, as well as all sorts of connected (IoT) devices.

Doubling memory capacity and bandwidth leaves more space for software applications to utilize main memory, allowing data to be read faster by programs and for more programs to be executed at the same time, increasing the speed of the whole computer system.

Server manufacturers who integrate our technology will potentially realize impressive cost savings, up to 25% of total hardware costs. Furthermore, as a result of processing efficiencies, our solution will potentially halve current energy expenditure for DRAM memories.

The Solution

Ziptilion™ is an innovative data-compression technology that consists of a silicon memory IP block on a chip. It exploits the memory resources of computing devices more efficiently than any other solution on the market.

Ziptilion™’s potential is achieved by integrating accelerated compression and decompression on the data path between the processor and the main memory.

Lossless Data Compression

Figure 1 shows the problem: the memory is full of data (shown in white between the memory controller and main memory). The memory bus is saturated, so processing capacity goes unused, and physical memory must be added to increase memory capacity.

The Memory Bottleneck
Figure 1

Figure 2 shows the solution: our compression IP block is placed near the memory controller. Solution value:

  • More data can be stored in memory.
  • More data can be transferred on the memory bus before saturation.
  • Computational performance increases, and/or
  • The need for additional, expensive physical memory is reduced.

Ziptilion™ manages the compressed memory by sampling and analysing data content and configuring compression and decompression accelerators for optimal results. It does this transparently to the system software, i.e. Ziptilion™ does not communicate with application software nor demand operating system changes, so it can be installed in all computing systems, independently of the software they use.

Ziptilion™ is installed in the SoC in the same way as other existing IP blocks, and different architectural integrations are possible. This allows software and external connections to work normally while the available memory capacity for applications and storage is increased two-fold.

Compression and decompression are lossless and ultra-low-latency, done in just a few nanoseconds.

Lossless Memory Compression
Figure 2

The Benefits of Ziptilion™

Increased capacity

Our compression algorithms and memory management approach typically offer a 2-3x memory expansion, depending on application and data.

Main Memory Capacity
Main Memory Bandwidth


In memory applications with, for example, DDR4 DRAM or faster memories, speed is not simply a nice benefit among others: if a compression scheme is not extremely fast, it cannot be used for memory applications regardless of how good the compression results are. Our IP block offers a scalable solution that can handle bandwidths of 20-40 GB/s per IP block or more.
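Back-of-the-envelope arithmetic (illustrative numbers) shows why per-line latency must sit in the nanosecond range at these bandwidths:

```python
bandwidth_gbs = 20   # lower end of the stated 20-40 GB/s range
line_bytes = 64      # one cache line

lines_per_second = bandwidth_gbs * 1e9 / line_bytes
ns_per_line = 1e9 / lines_per_second
# At 20 GB/s the engine must process one 64-byte line every ~3.2 ns,
# and proportionally faster at 40 GB/s.
```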

A significant benefit of memory compression is the potential to reduce average memory access time, since compressed memory data often contains multiple cache lines when retrieved from memory.
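A simple average-memory-access-time (AMAT) model, with purely illustrative latencies and miss rates, shows the effect: when each memory fetch returns a compressed block containing several adjacent cache lines, some accesses that would have missed are served from already-fetched data.

```python
def amat(hit_ns: float, miss_penalty_ns: float, miss_rate: float) -> float:
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_ns + miss_rate * miss_penalty_ns

baseline = amat(hit_ns=1.0, miss_penalty_ns=80.0, miss_rate=0.05)
# If half of the former misses now hit in previously fetched compressed data:
with_compression = amat(hit_ns=1.0, miss_penalty_ns=80.0, miss_rate=0.025)
```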

Non-invasive to operating system and applications

The memory management software is designed to be compatible with common Linux distributions and to work transparently, without the need for modifications to the operating system or applications.

Transparent Memory Management
Lossless Data Compression

Intelligent compression

The added memory benefits of increased capacity and bandwidth rely on steady and high compression performance of memory data. Memory (e.g., DRAM) data poses a fair number of challenges. Since memory data changes character often, depending on the set of currently used applications, it is important that the compression algorithms can dynamically monitor metadata and make intelligent, cost-benefit-aware decisions on when the data in memory suffers from suboptimal compression and needs to be recompressed. It is also highly beneficial to have algorithms intelligent enough to classify the data type currently being processed, without added delay or latency, in order to pick the best-performing compression scheme for a particular data type.
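Such a cost-benefit recompression decision could be sketched as follows. This is a hypothetical heuristic with invented parameters, not Ziptilion™'s actual policy: recompress a region only when a better scheme is available and the projected access-time savings outweigh the one-time recompression cost.

```python
def should_recompress(current_ratio: float, achievable_ratio: float,
                      expected_accesses: int, saved_ns_per_access: float,
                      recompress_cost_ns: float) -> bool:
    """Recompress only if the data is suboptimally compressed AND the
    projected savings over its remaining lifetime exceed the one-time
    cost of running it through the compression engine again."""
    if achievable_ratio <= current_ratio:
        return False  # already at (or near) the best known scheme
    return expected_accesses * saved_ns_per_access > recompress_cost_ns
```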

We combine proven statistical compression techniques with our own innovative, patent-protected ideas to make truly intelligent decision-making compression engines that work reliably and with high performance in the challenging environment that memory data presents.

Up to 25% reduction of hardware cost

Server manufacturers who integrate our technology will potentially realize impressive cost savings, up to 25% of total hardware costs.

50% More Performance Per Watt

Up to 50% less energy consumption in servers

Furthermore, as a result of processing efficiencies, our solution will potentially halve current energy expenditure for DRAM memories. Typical DRAM memories have a power consumption of around 350-450 mW (milliwatts) per GB; for larger servers with 1 TB of memory, this can amount to as much as 400 W per server. With our compression IP, a server user can reduce the amount of physical memory by up to 40-50%, cutting power consumption by as much as 160-200 W per server (or roughly 1,700 kWh per year).
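These figures can be reproduced with simple arithmetic, using the stated ~400 mW/GB and a 1 TB server (the year-round kWh figure assumes the saving holds continuously):

```python
power_mw_per_gb = 400   # stated typical DRAM power draw
server_gb = 1024        # 1 TB of memory

server_dram_w = power_mw_per_gb * server_gb / 1000   # ~410 W total DRAM power
saved_w_low = server_dram_w * 0.40                   # 40% less memory: ~164 W
saved_w_high = server_dram_w * 0.50                  # 50% less memory: ~205 W
kwh_per_year = saved_w_high * 24 * 365 / 1000        # ~1,795 kWh/year saved
```

The result lands close to the ~1,700 kWh/year figure above; the exact number depends on memory type and utilization.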

Moreover, about 40% of the total energy in a datacenter is consumed in cooling the IT equipment so reducing server power will also proportionally reduce its cooling needs and thus the total energy consumption.

Secure Memory Technology

AES-128 XTS & AES-256 XTS

Data confidentiality must be addressed in main memory

Security requirements are mandatory at all system levels and are no longer a nice-to-have but a must-have. Confidentiality must therefore be addressed in main memory.

Cryptography adds latency and consumes memory bandwidth and capacity. Combining encryption with compression results in a highly optimized solution that reduces the impact of encryption on memory bandwidth and latency. The result is high-speed, high-throughput encryption of the working memory.
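The ordering matters: good ciphertext looks random and no longer compresses, so compression must happen before (or jointly with) encryption. A stdlib-only Python sketch demonstrates this, using a SHAKE-256 XOR keystream purely as a stand-in for hardware AES-XTS (it is NOT real encryption, it merely produces ciphertext-like, high-entropy output):

```python
import hashlib
import zlib

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """XOR with a SHAKE-256 keystream -- a stand-in for AES-XTS used
    here only to give the data ciphertext-like entropy."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

page = b"balance=0;currency=EUR;" * 180          # redundant, compressible data
compress_then_encrypt = toy_cipher(zlib.compress(page), b"secret")
encrypt_then_compress = zlib.compress(toy_cipher(page, b"secret"))
# Compressing first keeps the size win; compressing ciphertext does not.
```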