SlimLib long-term project: the basics of an unlimited object repository for .NET [ENG]


SlimLib is a class library focused on the development of big-memory applications under .NET. Big-memory applications use main memory (RAM) to process data sets: modifications are done in memory and persisted to disk, while reads are mostly served from main memory, so hot data remains in RAM. Well designed, big-memory applications can be a few orders of magnitude faster than applications built on a standard, in-memory or embedded DBMS. They differ from in-memory DBMS because they avoid the separation between processing code and processed data by transparently maintaining the object paradigm.

Today, many low-cost servers have 8 to 32 gigabytes of RAM (Random Access Memory, the main memory). A mid-range server often gets from 64 to hundreds of gigabytes. A lot of production databases are smaller than that. Many business databases could be managed entirely in RAM in the form of simple C# objects. Because the data are POCOs (plain old C# objects), business code can be a lot simpler too. The big-memory approach makes it possible to do more work with fewer servers and simpler code, which means being more scalable both in terms of complexity and in terms of throughput.

This is SlimLib’s main goal. SlimLib permits the creation, update and storage of billions of immutable, persistent objects (instances of classes) in a transactional distributed repository. What makes SlimLib new and unique is that the objects are the data, not a temporary copy of the data as they usually are. Accessing these objects and their field values is, in most cases, as fast as accessing any native .NET class instance.

The vision

The facts

Over the last decades, the price of RAM has fallen. The computer industry is gradually merging working memory (RAM) and storage memory (hard disks). Database management systems such as MS SQL Server, Oracle, MySQL, PostgreSQL, MongoDB or Redis were designed under technical constraints that are becoming less and less relevant. They still have advantages over the big-memory approach: they are generic, sometimes standardized (SQL), instances are independent from applications and usually shared on a network, and there is no delay for pre-loading into RAM. This separation between application and database has had strong benefits. But the drawbacks are real too:

  • Limited integration with programming languages despite sophisticated ORMs.
  • Strong programming constraints to manage mutations and concurrency.
  • Poor performance compared to pure in-memory object processing.
  • They need connectors and transport layers that slow down applications.
  • Solid performance is achieved by programming the database engine itself with stored procedures, complex queries or extension assemblies.

Today, developers pay little attention to these limitations; they have lived with them forever. The benefits seem stronger than the drawbacks, and performance looks good. Our culture has been shaped by decades of DBMS use. There are technologies to reduce the impedance mismatch, and strong network infrastructure makes things stable and decently quick. If you work with Entity Framework, you get good language integration, but you pay for it in performance and memory consumption. If you need performance, you end up with disseminated code (in-database logic written in specific languages), a more complex design, and less maintainable and extensible code. Getting high performance is hard work compared to in-memory data processing. In the next decade, the perception of these constraints may change. With the rise of SSDs that permit a 3 GB/s preload rate (5 minutes to read one terabyte of data), the big-memory approach could shine for complex, feature-rich, scalable applications that process small and mid-size data sets, up to terabytes.

A promising approach

Access to main memory is really fast: latency and throughput are tremendous. On modern hardware, access to any data is around 1,400 times faster in RAM than on an NVMe SSD. The comparison with a 10 ms query against a DBMS (local or shared on a network) is worse: RAM is around 150,000 times faster. Ratios vary with the configuration, of course, but globally you will find the same order of magnitude of difference between big-memory applications and DBMS-based ones. This is why RAM systems of any kind, even in big data, will develop rapidly in the years to come. That’s why all major DBMS vendors have developed in-memory systems and so much money has been invested in a more robust version of Redis.


A new way

SlimLib is a low-tech, lightweight technology adapted to server application development that takes a new path. In the .NET world, the Linq query language was a huge advancement. Many developers use it with object collections (Linq to Objects). Inspired by the functional world, it will keep evolving to produce better business code. Combined with an in-memory approach like SlimLib’s, code is simpler and applications can be a lot faster: you can enumerate hundreds of millions of records per second on a commodity server. It avoids the complexity of distributed systems and the need to deal with the black box of a separate data management system: start the application, load the data as pure objects and process it at light speed like any collection of in-memory objects.

Today’s servers are powerful. Using SlimLib, scaling is only a matter of adding more RAM, more cores and more storage space to a few redundant servers. It is cheaper than writing a distributed application that gracefully manages local caches, distributed transactions, eventually consistent data, network overflow and partitioned registries.


With the generalization of flash storage and larger main memory at low cost, the SlimLib big-memory approach could become an alternative for developing any small to mid-size database-backed server application. If you manage a set of hot data from a few gigabytes to a few hundred gigabytes, SlimLib should be one of the easiest technologies to work with, replacing or enhancing MySQL, MS SQL Express, Redis or embedded database engines. You don’t have to learn and master another language or system than the ones you already know: C#, object-oriented design and Linq.

The starting points

The transportation overhead problem

You can use a separate in-memory DBMS or caching system on the same machine. The problem is that you then need to transport, decode and encode your data in a costly process: business code cannot directly access objects where they are stored. The actual business processing takes only a fraction of the total time needed to handle each user event or command; most of the time is consumed transporting, loading and storing data. Computing a bill total with many rules is fast; data access latencies make it slow. Maintaining a coherent data state is difficult, and lazy loading is effectively forbidden.

The garbage collector freezes problem

.NET is a great platform, perhaps the most productive and robust one. You can try to create a big repository of POCOs (Plain Old C# Objects) in RAM. You then have to implement a persistence mechanism that is crash proof and guarantees the coherency of the data. But the biggest problem is that if you create collections of millions of objects with strings and other sub-instance references, the Garbage Collector will freeze your process for seconds to minutes during memory collection and compaction. This is not compatible with production availability needs. Each time you create a new object, which is really fast, you also trigger a deferred analysis to determine whether it can be disposed or not. The more instances you create, the slower your application gets: under stress tests, your system seems anemic while significant power is spent releasing tons of intermediate objects and observing long-lived ones. Garbage Collector tuning is of little help. The reality is that if you have a .NET process that takes a few gigabytes of RAM, you are already in the danger zone. Having a lot of RAM is useless with .NET, or a real danger.
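The pressure described above is easy to observe with the standard `GC` counters. The following is a minimal sketch (names and sizes are arbitrary, not taken from SlimLib) showing how a long-lived, reference-rich collection forces full Gen 2 collections, the ones that pause the process:

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: a tiny pressure test showing how reference-rich,
// long-lived collections drive full (Gen 2) collections.
class Demo
{
    class Record { public string Name; public int[] Values; }

    static void Main()
    {
        int gen2Before = GC.CollectionCount(2);
        var survivors = new List<Record>();

        for (int i = 0; i < 1_000_000; i++)
        {
            // Each record pins three heap objects the GC must keep tracing.
            survivors.Add(new Record { Name = "item" + i, Values = new int[4] });
        }

        GC.Collect(); // force a full collection, as the runtime eventually would
        Console.WriteLine(GC.CollectionCount(2) > gen2Before); // True
        Console.WriteLine(survivors.Count);                    // 1000000
    }
}
```

The cost grows with the number of live references the collector must trace, not with the amount of RAM installed, which is exactly why adding memory does not help a managed-heap design.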

This is a major bottleneck.

The read / write ratio opportunity

Many of today’s applications access data far more in read mode than in write mode; the ratio is usually one write for thousands of reads or more. Rendering a page, finding data, requesting a list view or computing analytics are mostly read operations. We often transform stored raw data into objects, and a programming language like C# interacts at optimal speed with objects (class instances) and structures that contain fields. Slowing down write operations (creating instances and changing field values) in exchange for faster access, read, send and write of objects can enhance the overall system. It pushes back the limits of code complexity.

The multi-processing problem

Today’s mid-range servers have 4 to 16 cores. This fantastic power is hard to exploit. Object-oriented languages like C# are still based on the old single-threaded mutation paradigm for managing data. Mutability is a hard problem: it requires critical code sections to avoid side effects, sections sometimes so large that they cancel the benefits of parallelization. Functional languages ease the parallelization of data processing, but they generate lots of intermediate data structures that must be allocated on the stack to be processed efficiently; the computation result is still stored in long-lived heap memory, so the application falls back into the GC problem.


Taking these facts into account, SlimLib defines global goals:

  • Avoid any transportation and transformation of the stored data into object land, and from object land back to the data store, each time the data is accessed or mutated, as is done with any database system. Native objects must be the data.
  • Avoid garbage collector freezes.
  • Exploit the read/write ratio as an opportunity to globally enhance performance: slow down mutation operations a little to strongly enhance every access operation.
  • Ease the writing of concurrent processing code.

To reach these goals, SlimLib enhances basic mechanisms to push back major .NET bottlenecks:

  1. It redefines the way class instances and their fields are managed so that any .NET application can allocate terabytes of in-memory objects without garbage collector freezes. It does not need any custom runtime; it is written in pure portable C# code.
  2. It avoids all processing time dedicated to serialization and provides globally cloneable, comparable and optionally immutable class instances. Objects can be stored, read, sent and compressed at a speed that no serialization technique, even the best, can approach.
  3. It eases multiprocessing programming. Each object is or can be made fully immutable, together with all its first-level sub-objects (internal strings, arrays).
  4. It offers a real-life feature set for both dictionary-style (key/value) databases and graph-oriented relational databases. It permits inserting, updating, finding and removing millions of objects and object relations per second, faster than the .NET ConcurrentDictionary<TKey,TValue>.
  5. In the SlimLib code base, classes like ConcurrentQueue, Task, Monitor or hash tables are replaced by lock-free or low-contention, garbage-free and cache-friendly versions.
  6. It supports on-the-fly disk reads to limit the memory footprint of large objects like blobs or files.


Native memory

SlimLib manages data “objects” in an unsafe, native memory context. This kind of object is not managed by the Garbage Collector. Because they are unsafe, they need care to avoid corrupting the process memory and leaking memory; C and C++ developers are used to that. But using SlimLib is not a brutal return to the C or C++ memory management paradigm for everything. SlimLib permits a mix between the managed world and the unmanaged one: the majority of the application’s long-lived data can live in unmanaged memory blocks while intermediate processing objects can benefit from the Garbage Collector. This compartmentalization takes advantage of both worlds with a decent level of safety. SlimLib maintains strong coherency from the ground up between these two worlds where, in most cases, unmanaged resources are driven by managed ones. Complex unmanaged memory management is hidden, and memory management (allocation, release, collection) is done in a deterministic way.
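The “unmanaged resources driven by managed ones” pattern can be sketched with standard .NET interop primitives. This is a minimal illustration, not SlimLib’s actual allocator: the `NativeBlock` name and layout are invented for the example.

```csharp
using System;
using System.Runtime.InteropServices;

// A minimal sketch of the "managed object drives an unmanaged block" pattern.
sealed class NativeBlock : IDisposable
{
    public IntPtr Pointer { get; private set; }
    public int Size { get; }

    public NativeBlock(int size)
    {
        Size = size;
        Pointer = Marshal.AllocHGlobal(size); // allocated outside the GC heap
    }

    // Deterministic release: no GC freeze, no compaction of this memory.
    public void Dispose()
    {
        if (Pointer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(Pointer);
            Pointer = IntPtr.Zero;
        }
        GC.SuppressFinalize(this);
    }

    ~NativeBlock() => Dispose(); // safety net against leaks
}

class Demo
{
    static void Main()
    {
        using (var block = new NativeBlock(1024))
        {
            Marshal.WriteInt64(block.Pointer, 42L);  // write into native memory
            long value = Marshal.ReadInt64(block.Pointer);
            Console.WriteLine(value);                // 42
        }
    }
}
```

The unmanaged block never burdens the collector, while the tiny managed wrapper guarantees it is released exactly once, either deterministically through `Dispose` or, as a last resort, by the finalizer.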

Object ghosts

In a normal class instance, string and array fields are references to independently allocated objects. SlimLib introduces the concept of object ghosts, or “packed objects”. A ghost’s fields live in a single block of memory that contains all the fields of a given class instance: primitive values, strings and arrays are kept in one contiguous memory block from the start to the end of the object’s lifecycle. All field values are read where they are, in the unmanaged memory space. In most situations there is no copy and no managed object creation, except for strings, for which there is no alternative at this time. And if a local copy is made, its life is extremely short: the GC collects it at light speed.
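To make the idea concrete, here is a toy packed layout. It is illustrative only: the field order and encoding are invented for the example and are not SlimLib’s actual format, and a managed `byte[]` stands in for the unmanaged block.

```csharp
using System;
using System.Text;

// Illustrative only: a toy "packed object" layout.
// One contiguous buffer holds: [int Age][int NameByteCount][Name bytes (UTF-8)].
class PackedPerson
{
    readonly byte[] _block; // stands in for an unmanaged memory block

    public PackedPerson(int age, string name)
    {
        byte[] nameBytes = Encoding.UTF8.GetBytes(name);
        _block = new byte[4 + 4 + nameBytes.Length];
        BitConverter.GetBytes(age).CopyTo(_block, 0);
        BitConverter.GetBytes(nameBytes.Length).CopyTo(_block, 4);
        nameBytes.CopyTo(_block, 8);
    }

    // Value fields are read in place: no object creation at all.
    public int Age => BitConverter.ToInt32(_block, 0);

    // Strings are the exception: a short-lived managed string is materialized.
    public string Name
    {
        get
        {
            int count = BitConverter.ToInt32(_block, 4);
            return Encoding.UTF8.GetString(_block, 8, count);
        }
    }

    // The whole object is already "serialized": the block can be written as-is.
    public ReadOnlySpan<byte> RawBytes => _block;
}

class Demo
{
    static void Main()
    {
        var p = new PackedPerson(42, "Ada");
        Console.WriteLine(p.Age);   // 42
        Console.WriteLine(p.Name);  // Ada
    }
}
```

Because every field lives inside one block, persisting or sending the object is just writing `RawBytes`; no per-field serialization step exists.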


Advantages of ghosted objects:

  • Copying an object and all its fields (including arrays and strings), whatever they are, is simply a matter of allocating and copying one block. With a .NET object, you have to use a slower cloner method that copies every field, string and array individually.
  • Comparing two objects is done with a simple memory comparison. If the two objects do not have the same size, you immediately know that they are not equal. With a .NET object you have to compare each of its fields, strings and arrays.
  • You can take the memory block of the object as-is for persistence or for sending it over the network: all strings, arrays and values are already in a serialized format. There is no serialization process anymore; sending, writing and reading objects is a light-speed process.
  • You can temporarily compress the whole object at low cost.
  • Processor cache churn is lower because all the sub-structures of the object are in the same memory region, usually in the same memory page.
  • Making an object immutable is a lot simpler and doesn’t need any lock.
  • There is no overhead for value field access, and a small one for strings or arrays. And it is more efficient to do small local computations than to randomly access memory.

Drawbacks and limitations:

  • SlimLib does not auto-serialize a complete object graph, only the first level of unmanaged fields. But the majority of database schemas have a flat design where rows contain primitives, strings and arrays.
  • Modifying a variable-size field needs a memory copy and often a reallocation.
  • Getting the value of a string field may require the creation of a new, ephemeral, short-lived string object.
  • The memory topology of the objects (field alignment) and the endianness are captured during persistence operations. If you load objects from a file on a platform that does not support the alignment and/or endianness of the one that generated them, you will face problems. In practice, the majority of hardware running .NET follows the x64 conventions, even ARM architectures.

Relational repository

SlimLib implements a standard repository of ghosted objects. The class used to access the fields in the ghost is an empty class instance, the “ghost object”. Each ghosted object has metadata: a type, a version of its type and an identifier. The standard object identifier is based on a Guid; the repository is not the identifier generator. You can create these objects anywhere in the code and manipulate them like real POCO objects: modify their properties, pass them as parameters, store them in collections, and so on.

The repository has a table for each object type; objects are grouped by type in these tables. The repository permits establishing relationships between objects. Relations permit creating lists of objects as you would with joins. A relation type is described by an unsigned short: for example, you can define relation 10 as “FollowerOf” and establish a relation of this type between two objects. You can constrain relations to define 1-1, N-1 or N-N relations. You can enumerate the relations of an object in the forward direction (X is “FollowerOf” Y) or in the backward direction (from Y, find every X that is a “FollowerOf” Y, which means Y is “FollowedBy” X). When you delete an object, all its relations are deleted: you cannot have invalid relations.
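The forward/backward enumeration can be pictured with a toy relation store. This is a sketch of the idea only: the `RelationStore` type, its method names and the dictionary-based indexes are invented here and are not SlimLib’s actual API or storage.

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: a toy in-memory relation store with forward and
// backward indexes, keyed by (relation type, object id).
class RelationStore
{
    const ushort FollowerOf = 10; // relation types are plain ushort codes

    readonly Dictionary<(ushort, Guid), List<Guid>> _forward = new();
    readonly Dictionary<(ushort, Guid), List<Guid>> _backward = new();

    public void Link(ushort relation, Guid from, Guid to)
    {
        Add(_forward, (relation, from), to);
        Add(_backward, (relation, to), from);
    }

    public IReadOnlyList<Guid> Forward(ushort relation, Guid from) =>
        _forward.TryGetValue((relation, from), out var list) ? list : Array.Empty<Guid>();

    public IReadOnlyList<Guid> Backward(ushort relation, Guid to) =>
        _backward.TryGetValue((relation, to), out var list) ? list : Array.Empty<Guid>();

    static void Add(Dictionary<(ushort, Guid), List<Guid>> index, (ushort, Guid) key, Guid value)
    {
        if (!index.TryGetValue(key, out var list)) index[key] = list = new List<Guid>();
        list.Add(value);
    }

    static void Main()
    {
        var store = new RelationStore();
        Guid x = Guid.NewGuid(), y = Guid.NewGuid();
        store.Link(FollowerOf, x, y);                             // X is "FollowerOf" Y
        Console.WriteLine(store.Forward(FollowerOf, x).Count);    // 1
        Console.WriteLine(store.Backward(FollowerOf, y)[0] == x); // True
    }
}
```

Keeping both indexes up to date on every `Link` is what makes the backward walk (“FollowedBy”) as cheap as the forward one, and deleting an object would simply mean purging its entries from both indexes.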

All modifications of the object tables (inserts, deletes, updates of objects) and of the relations between objects can be done in transactions. All mutations are appended to a log file. Once you have modified the repository, you can commit it, which stores all the transaction’s mutations on disk.
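The commit-to-log pattern can be sketched as an append-only file plus an explicit commit. Again, this is illustrative: the `MutationLog` type and the textual record format are invented for the example and say nothing about SlimLib’s real on-disk format.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Illustrative only: a toy append-only mutation log with an explicit commit.
class MutationLog
{
    readonly List<string> _pending = new();
    readonly string _path;

    public MutationLog(string path) => _path = path;

    // Mutations accumulate in memory until the transaction is committed.
    public void Record(string mutation) => _pending.Add(mutation);

    // Commit appends all pending mutations durably, then clears the batch.
    public void Commit()
    {
        using var writer = new StreamWriter(_path, append: true);
        foreach (var m in _pending) writer.WriteLine(m);
        writer.Flush();
        _pending.Clear();
    }

    static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "mutation-demo.log");
        File.Delete(path); // start clean for the demo
        var log = new MutationLog(path);
        log.Record("insert Person 1");
        log.Record("update Person 1");
        log.Commit();
        Console.WriteLine(File.ReadAllLines(path).Length); // 2
    }
}
```

Because nothing reaches the disk before `Commit`, an uncommitted batch simply disappears on a crash, which is the property that makes the repository's state recoverable from the log.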


Real-life benchmarks

DBMS benchmarks often lie. For remote high-performance DBMS like Redis, benchmarks usually measure network performance more than engine performance. For in-memory systems like FASTER or LMDB, or any in-memory key/value store, benchmarks measure the efficiency of the Equals(), GetHashCode() or serialization code. In most cases, the performance of the external code matters more than that of the DBMS itself. That’s why stuffing a DBMS with long keys and tiny single-byte arrays to benchmark overall performance is close to a lie: real-world performance will be far away from these impressive results and depends on the performance of the outer code.


This is where SlimLib is different. The data injected into the SlimLib persistence system are full-featured, single-memory-block “ghosted objects”: they don’t need any additional processing to be managed by the repository. SlimLib benchmarks are real-world ones, with complex objects and various fields. The performance results reflect what you’ll get in real-world production applications.
