Products

Product overview

ZeroPoint provides real-time data compression technology that doubles memory capacity and bandwidth at radically higher power efficiency. The products are delivered as IP blocks that our customers integrate into their SoCs.
The computational density of integrated circuits continues to increase at a pace far from matched by memory capacity and bandwidth. Our mission is to put unused resources to work. Our research shows that memory content is losslessly compressible by a factor of 2-3x, and our memory compression technology delivers up to 50% higher performance per watt at the system level.
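The lossless compressibility of typical in-memory data can be illustrated with a general-purpose software compressor (a sketch using Python's zlib, not ZeroPoint's proprietary algorithm; memory pages with sparse or repetitive content, which are common in practice, often compress well beyond 2x):

```python
import zlib

# A synthetic 4 KiB "memory page": a repeated small value plus zero
# padding, a pattern common in real heap and stack pages.
page = (b"\x2a\x00\x00\x00" * 64) + bytes(4096 - 256)

compressed = zlib.compress(page)
ratio = len(page) / len(compressed)

# Compression is lossless: decompression restores the page exactly.
assert zlib.decompress(compressed) == page
print(f"{len(page)} -> {len(compressed)} bytes ({ratio:.1f}x)")
```

The actual ratio depends entirely on the data; fully random pages compress hardly at all, which is why the 2-3x figure is stated as an average across data sets.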
To accommodate the increasing demand for high-performance, low-latency security solutions, ZeroPoint has also developed an encryption/decryption IP core that can be provided stand-alone or integrated with our compression IP.

ZeroPoint Products

Bandwidth Acceleration

The ZeroPoint Bandwidth IP core packages a novel and proprietary technology that accelerates the limited off-chip bandwidth of main memories through intelligent, real-time, general-purpose and on-the-fly memory data compression. The product benefit is significantly more main memory bandwidth at unmatched power efficiency.

Application

Server CPUs, smart devices and embedded systems all face the same challenge: memory bandwidth limits system scaling, and the many cores and accelerators compete to serve their memory access requests. We have evaluated a wide range of data sets from these applications, and the results consistently show that bandwidth acceleration is an efficient and effective way to utilize the full memory potential.

Integration

ZeroPoint Bandwidth IP is integrated in the memory subsystem of the SoC, close to the memory controller, so that it can intercept all memory traffic to and from DRAM and compress and decompress the data on-the-fly. The effect of compression is transparent to the CPU/accelerator subsystem as well as to the operating system and applications. Similarly, the memory controller is unaware that the transmitted/received memory data is compressed. In essence, compression, decompression, compaction and addressing of the compressed memory space are all handled automatically, transparently and entirely in hardware by the IP.
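Conceptually, the IP sits between the on-chip interconnect and the memory controller and hides compression from both sides. A minimal software analogy of that transparency (hypothetical class and method names, with zlib standing in for the proprietary algorithm; the real IP does this in hardware at memory-access granularity):

```python
import zlib

class CompressedMemory:
    """Toy stand-in for DRAM behind a transparent compression layer.

    Callers read and write plain data; the backing store only ever
    holds compressed data, without the caller being aware of it.
    """

    def __init__(self):
        self._store = {}  # address -> compressed bytes

    def write(self, addr: int, data: bytes) -> None:
        # Compress on the way in (the IP does this on-the-fly in hardware).
        self._store[addr] = zlib.compress(data)

    def read(self, addr: int) -> bytes:
        # Decompress on the way out; the caller never sees compressed data.
        return zlib.decompress(self._store[addr])

mem = CompressedMemory()
line = b"\x00" * 60 + b"\xde\xad\xbe\xef"
mem.write(0x1000, line)
assert mem.read(0x1000) == line             # transparent to the "CPU"
assert len(mem._store[0x1000]) < len(line)  # but stored compressed
```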

ZeroPoint Bandwidth IP is compatible with all DRAM technologies and supports standard interfaces such as AXI and CHI. Other proprietary interfaces can be supported upon request.
Certain features and sizes of ZeroPoint Bandwidth IP can be customized during the pre-silicon implementation, while post-silicon configuration for tuning the IP in the final system is available through the provided device driver.

Benefits

  • High-performance, low-latency main memory bandwidth acceleration: 25% on average, with peaks of 50%
  • Unmatched power efficiency
  • Real-time compression, super-fast compaction and transparent memory management
  • Operating at main memory speed and throughput
  • Compatible with AXI4/CHI, with both 128-bit and 256-bit bus interfaces
  • Intelligent real-time analysis and tuning of the IP Block

Performance

Compression ratio: 2-3x across diverse data sets
Bandwidth acceleration: 25-50%
Performance acceleration: 10-25%
Frequency: DDR4/DDR5 DRAM speed
IP area: starting at 0.4 mm² (TSMC 7 nm)
Memory technologies supported: (LP)DDR4, (LP)DDR5, HBM

SuperRAM

The SuperRAM IP Core implements a hardware accelerator for zram compression and decompression. SuperRAM implements a ZeroPoint proprietary compression algorithm. SuperRAM is optimized for power efficiency, high throughput and high compression efficiency.

Applications

  • Smart devices: The product benefit is a faster user experience at unmatched power efficiency, as the host processor is offloaded and page swapping is hardware accelerated.
  • Servers: The product benefit is more system performance at lower power. The operating system or hypervisor offloads software-based compression of swapped pages to a super-fast hardware page compression engine, returning more performance to the guest at unmatched power efficiency.

Integration

SuperRAM is integrated on the SoC like other hardware accelerators, as a master node on the SoC interconnect. The integration includes a software driver, so that the zram crypto-compress API dispatches a command to the SuperRAM accelerator whenever software triggers a compression or decompression.
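On Linux, zram selects its compression backend by name through sysfs, which is where an accelerator-backed algorithm registered via the crypto API would be chosen. A standard zram swap setup sketch (the algorithm name `zeropoint` is a hypothetical placeholder for whatever name the SuperRAM driver actually registers; requires root):

```shell
# Load the zram module with one device.
modprobe zram num_devices=1

# Select the compression algorithm (driver-registered name is hypothetical).
echo zeropoint > /sys/block/zram0/comp_algorithm

# Size the device and enable it as high-priority swap.
echo 2G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon /dev/zram0 -p 100
```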

Benefits

  • High performance and low latency hardware accelerated zram/zswap at unmatched power efficiency
  • Off-loading the CPU – more cycles released to user workloads
  • Power efficiency – Less energy
  • Speed – Fast compression and low latency access
  • Multiple in-flight compression and decompression operations executing in parallel
  • Operating at main memory speed and throughput
  • Compatible with AXI4/CHI, with both 128-bit and 256-bit bus interfaces
  • Intelligent real-time analysis and tuning of the IP Block

Performance

Compression ratio: 2-4x across diverse data sets
Compression throughput: 8GB/s
Decompression throughput: 10.5GB/s
Frequency: DDR4/DDR5 DRAM speed
IP area: starting at 0.4 mm² (TSMC 7 nm)
Memory technologies supported: (LP)DDR4, (LP)DDR5, HBM

 

SphinX AES-XTS Security IP

SphinX is designed to accommodate the speed, latency and throughput requirements of computer systems' main memory. The IP implements the standard AES cipher (NIST FIPS 197) in XTS mode (IEEE Std 1619-2018). The SphinX family of cores is scalable, with 128-bit and 256-bit key support, allowing the designer to choose the most efficient and effective core for the latency and throughput requirements at hand.

The design is fully synchronous and supports independent, non-blocking encryption/decryption at main memory speed. SphinX is available for immediate licensing.

Key Features

  • High Performance and Low Latency industry standard encryption / decryption
  • Independent non-blocking encryption and decryption channels
  • 128-bit and 256-bit keys supported
  • Supports AES-XTS mode, without Cipher Text Stealing (CTS)
  • No additional memory required
  • Key expansion included
  • Fully pipelined design, optimized for high throughput and low latency
  • Operating at main memory speed and throughput
  • Modular and scalable architecture to easily accommodate customer data rates

Function Description

SphinX is designed to accommodate the speed, latency and throughput requirements of high-performance computer systems. This includes main memory and other high-performance storage devices such as NVMe, SSD, Optane and PCIe-connected devices. The IP implements the standard AES cipher (NIST FIPS 197) in XTS mode (IEEE Std 1619-2018). The IP is modular and can easily scale to higher throughput. The design is fully synchronous and supports independent, non-blocking encryption/decryption at main memory speed.

The IP supports 128-bit and 256-bit keys and has an initialization mode and an operation mode. During initialization, the IP reads the keys, expands them and initializes itself. The IP also supports an optional bypass control.
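A distinctive element of XTS mode is its per-block tweak: the sector tweak is encrypted once, then multiplied by α in GF(2^128) for each successive 16-byte block in the sector, so identical plaintext blocks at different positions encrypt differently. A sketch of that multiplication step as defined in IEEE Std 1619 (illustrative software only, not SphinX's hardware implementation):

```python
MASK128 = (1 << 128) - 1

def xts_mul_alpha(tweak: int) -> int:
    """Multiply a 128-bit XTS tweak by alpha (the polynomial x) in
    GF(2^128) modulo x^128 + x^7 + x^2 + x + 1, per IEEE Std 1619.

    The integer is the little-endian view of the 16-byte tweak value.
    """
    carry = tweak >> 127           # bit shifted out of the top
    tweak = (tweak << 1) & MASK128
    if carry:
        tweak ^= 0x87              # reduce by the field polynomial
    return tweak

# Block 0 uses the encrypted sector tweak T; block j uses T * alpha^j.
t = int.from_bytes(bytes([1] + [0] * 15), "little")  # T = 1
assert xts_mul_alpha(t) == 2
assert xts_mul_alpha(1 << 127) == 0x87  # wrap-around triggers reduction
```

Because each multiplication is just a shift and a conditional XOR, the tweak chain maps naturally onto a fully pipelined hardware datapath.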

Applications

  • Main memory (DDR4/DDR5) independent, non-blocking encryption/decryption
  • Hard drive (SATA, SAS, PCIe, NVMe and CXL) encryption/decryption compliant with the IEEE Std 1619-2018
  • Applications that require integration of encryption/decryption into the data path
  • Applications with high throughput, low latency and strong encryption requirements
  • Applications requiring FIPS-197 certified encryption/decryption algorithms 

References

IEEE Std 1619-2018, IEEE Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices https://standards.ieee.org/standard/1619-2018.html
NIST FIPS 197, Advanced Encryption Standard (AES) https://www.nist.gov/publications/advanced-encryption-standard-aes
