Most frequent questions and answers

In our research we have evaluated our technology on a wide range of applications and data sets. Based on industry use cases (see SPEC 2017) we typically see a compression ratio (CR%) of 50%, which means that the compressed memory footprint occupies only 50% of the original space.
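As a small illustration of the CR% metric as defined above (the 4 GiB / 2 GiB byte counts below are invented for the example; only the 50% typical figure comes from our measurements):

```python
def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """CR% as used above: the compressed footprint expressed as a
    percentage of the original footprint (lower is better)."""
    return 100.0 * compressed_bytes / original_bytes

# Hypothetical example: a 4 GiB working set compressing to 2 GiB
# corresponds to the typical CR% of 50% quoted above.
cr = compression_ratio(4 * 2**30, 2 * 2**30)
print(f"CR% = {cr:.0f}%")
```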

Based on this compression ratio, significantly less data is transferred between the DRAM controller and the DRAM memory. Depending on the application and the nature of the data, we have seen up to 50% less traffic over the DRAM interface, in line with the compression ratio (CR%) achieved.

The Ziptilion™ technology adds significant value for any customer building their own ASIC or FPGA SoC (System on a Chip). A SoC is a system built around one or more CPUs, a memory subsystem, a set of accelerators and, depending on the customer and the application the SoC targets, proprietary logic blocks.

The Ziptilion™ technology consists of two parts: a HW IP block that is placed on the ASIC or FPGA silicon, and a SW device driver that runs on the host CPU.

Depending on the application, architecture and technology, the figures will differ considerably. For reference, we have prepared a comparison that we are happy to share. Just get in touch with the team with your inquiry.

The maximum throughput of the Ziptilion™ technology depends on a combination of design, process node, cell libraries and bus architecture.
As an example, we have an implementation of the Ziptilion™ technology today in a 28nm TSMC process where the AXI bus runs at 800MHz, which results in a throughput of 32 GB/s.
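As a rough sketch of how the numbers relate (the bytes-per-cycle figure is derived from the quoted throughput and clock, not a stated bus width; how those bytes split across read and write channels is an assumption left open here):

```python
def peak_throughput_gbps(freq_hz: float, bytes_per_cycle: float) -> float:
    """Peak throughput in GB/s for a bus that moves `bytes_per_cycle`
    each clock cycle at `freq_hz`."""
    return freq_hz * bytes_per_cycle / 1e9

# The quoted 32 GB/s at an 800 MHz AXI clock implies an average of
# 40 bytes moved per clock cycle (the text does not state the exact
# bus width or channel configuration):
bytes_per_cycle = 32e9 / 800e6
print(bytes_per_cycle)                       # 40.0 bytes/cycle
print(peak_throughput_gbps(800e6, bytes_per_cycle))  # 32.0 GB/s
```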

The majority of embedded uP systems today use the AXI bus architecture, due to the significant penetration of ARM-based architectures. Ziptilion™ has solid support for the AXI bus architecture, but is not limited to it: thanks to the flexible interface of the solution, Ziptilion™ is agnostic to the on-chip bus architecture.

We strongly believe that our customers should follow the evaluation funnel.

That way we both make sure that you understand and appreciate early on how the Ziptilion™ technology adds value for your application and architecture. The evaluation funnel consists of three steps.

  1. Compression ratio analysis – you send us data (a memory dump based on our specification), and within a few days you receive a detailed data compression analysis report that clearly indicates the compression potential of your data.
  2. Memory management emulation – we hold a data and architecture review meeting. After that we run simulations based on your data and system architecture, and within a couple of weeks you receive a detailed data compression and memory management analysis report that details the compression and memory expansion potential for your data and system architecture.
  3. Architectural simulation – at this point we configure the architectural simulator based on your application and data, and run GEM5 simulations. Working together, within a few weeks we will know the compression and memory expansion that the Ziptilion™ technology will achieve for your data and system architecture.

This picture details the scenarios for Ziptilion™.


  1. When the request hits in the buffer, the latency is 4 clock cycles. Typically the hit rate is 50%, due to the nature of the system and the locality of the application.

  2. When the CPU requests data from memory and there is a miss in the buffer, the address is translated and the request is sent to working memory. When the data returns, it is decompressed. This adds a total of 3+7 cycles to the working memory DRAM read request (which is on the order of 100 cycles).
  3. It is worth underlining that when a read is performed from working memory, two lines are returned instead of one, because decompression yields more data than was requested. That extra data is placed in the buffer, which is what produces the 50% hit rate in the buffer and the resulting very quick read requests.
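The two scenarios above can be combined into a simple average-latency sketch. All cycle counts below come from the list above; the 100-cycle DRAM access is the order-of-magnitude figure quoted there, not an exact measurement:

```python
def avg_read_latency(hit_rate: float,
                     hit_cycles: int = 4,
                     dram_cycles: int = 100,
                     overhead_cycles: int = 3 + 7) -> float:
    """Average read latency in cycles: buffer hits are served in
    `hit_cycles`; misses pay the DRAM access plus the 3+7 cycles of
    address translation and decompression overhead."""
    miss_cycles = dram_cycles + overhead_cycles
    return hit_rate * hit_cycles + (1.0 - hit_rate) * miss_cycles

# With the quoted 50% buffer hit rate:
print(avg_read_latency(0.5))  # → 57.0 cycles on average
```

The key point of scenario 3 is visible here: the extra decompressed line placed in the buffer is what sustains the 50% hit rate, pulling the average well below the raw DRAM latency.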

Our IP block sits on the memory access path and is transparent to the operating system and applications. The SW device driver runs on the host microprocessor.