Millets - Slices improvements

As indicated in an earlier post, the Slice abstraction has proved to be a very robust addition to the framework.
To reiterate, a slice reference provides more fine-grained control over an already allocated (contiguous) chunk of memory at the user-space level.

* Temporary: all such slice references are supposed (and proven) to exist only temporarily during the lifetime of their parent, which also lets us do away with ref-counting logic for such references in production/release builds.

This Slice API is then exposed by the framework to build data structures upon, and since all such slices are bound by memory- and thread-safety checks, this can lead to a proliferation of desired one-off data structures, which I find to be a very desirable property for big, performant codebases.
Rather than depending upon native data structures, we instead strive to provide "memory safety as a composable runtime abstraction". This is quite a mouthful, but I think it captures the minimal guarantees/ideas necessary to write fast, practical, and concurrent applications in any programming language. The runtime nature of such abstractions may even free language developers to experiment with features like intuitive syntax, interesting control flow, and exotic primitives at the compiler level!

Currently the framework doesn't provide any managed pointer or reference-like abstraction (in the general sense) to make it easier to write (safe) dynamic and recursive data structures. At their core, dynamic data structures treat a (contiguous) chunk of (shared) memory as a structured object/class/struct. Many ML-family languages, and Python, store all data on the heap by default, with a corresponding pointer on the stack.

Regardless of the actual internal mechanism for retrieving the desired data through a variable/binding, each such access can be assumed to perform some (shared) memory access, so it seems possible to leverage our existing memory-safety framework for such a "managedPointer" abstraction. By (explicitly) transforming each field/data access into a range of memory, we can trap each field access, triggering safety checks without any manual intervention.
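As a plain-Nim illustration (independent of the framework), every field of an object already corresponds to a fixed (offset, length) byte range within its chunk, which is exactly the granularity that byte-level checks operate at. The `Node` type here is purely illustrative:

```nim
type
  Node = object
    value: int32
    next: int64   # stands in for a pointer-sized field

# Each field access can be mapped to a byte range (offset, len)
# inside the object's chunk; trapping ranges is enough to trap fields.
echo (offsetOf(Node, value), sizeof(int32))
echo (offsetOf(Node, next), sizeof(int64))
```
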

The initial (limited) intention was just to be able to write (relatively) rigid data structures like vectors/strings/sequences; I couldn't visualize the potential benefit of such a framework in the context of dynamic data structures.

To make it work, the Slice abstraction was modified to allow tagging the underlying "chunk/buffer" with a desired tag/type. This lets us re-interpret the underlying memory (temporarily) and also complements the initial idea of maximum buffer reuse coupled with robust type-checking. And since thread-safety checks work directly at the byte level, independent of such semantic/type information, it remains possible to detect any "concurrent" access by default.

The current signature for slice looks like this:

proc slice*[T](
    x: UserRecord[T],
    offsetInBytes: Natural,
    len: Natural,
    tag: typedesc,        # desired type-info for this particular slice/reference.
    writable: bool = true
): UserRecord[tag] =
  ...

For any object like:

type 
    Test = object
        a:uint8
        b:uint16
        c:uint8

We could use the existing APIs to trigger safety-checks on each field access.

var x: UserRecord[Test] = allocRecord[Test](1)  # allocate enough memory to hold one Test.

# s_a can only store a single uint8 value
var s_a = x.slice(
    offsetInBytes = offsetOf(Test, a),
    len = 1,
    tag = typeof(x.a)
    )

# s_b can only store a single uint16 value
var s_b = x.slice(
    offsetInBytes = offsetOf(Test, b),
    len = 1,
    tag = typeof(x.b)
    )

# s_c can only store a single uint8 value
var s_c = x.slice(
    offsetInBytes = offsetOf(Test, c),
    len = 1,
    tag = typeof(x.c)
    )

s_a[0] = 8'u8
s_b[0] = 7'u16
s_c[0] = 42'u8
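For comparison, the unchecked plain-Nim equivalent of this re-interpretation is just pointer arithmetic over the same buffer; the tagged slices add the type checks and thread-safety enforcement on top of exactly this mechanism:

```nim
type
  Test = object
    a: uint8
    b: uint16
    c: uint8

# Plain-Nim analogue without the framework: reinterpret disjoint byte
# ranges of one buffer as typed views (no safety checks here).
var buf: array[sizeof(Test), byte]
let base = cast[uint](addr buf[0])
cast[ptr uint8](base + offsetOf(Test, a).uint)[] = 8'u8
cast[ptr uint16](base + offsetOf(Test, b).uint)[] = 7'u16
cast[ptr uint8](base + offsetOf(Test, c).uint)[] = 42'u8
echo cast[Test](buf)   # fields read back through the object view
```
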

Coupled with the fact that slices are temporary (i.e. no ref-counting cost), I think this has potential for out-of-the-box parallelization of operations like finding an element in a linked list. But we do need some extra metadata for this convenience :)

type
    ManagedPointer[T] = object
        memory:ptr T
        ref_counter:ptr int
        ...

template `.`[T](obj: ManagedPointer[T], field_t: untyped): typed =
    var s = slice(.., offsetOf(field_t), ..)   # new (temporary) slice
    var result_t = s[0]                        # triggers memory checks.
    drop(s)                                    # manually destroy/drop the slice at this instant.
    result_t

template `.=`[T](obj: ManagedPointer[T], field_t: untyped, value_t: typed) =
    var s = slice(.., offsetOf(field_t), ..)
    s[0] = value_t
    drop(s)
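With these templates in place, field access on a `ManagedPointer` would read like ordinary Nim while every access is trapped. A hypothetical usage sketch (`allocManaged` is an assumed constructor, not part of the current API):

```nim
# Hypothetical: assumes the `.` / `.=` templates above and an
# `allocManaged` constructor that does not yet exist in the framework.
var p: ManagedPointer[Test] = allocManaged[Test]()
p.a = 8'u8      # expands to: slice over `a`'s bytes, checked write, drop
p.b = 7'u16     # each access creates and drops a temporary slice
echo p.a        # checked read; concurrent access would be detected
```
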

There is still work pending: UserRecord may be modified slightly to allow deferring deallocations until tracing code finishes. Cycle detection and collection remain tricky, especially in concurrent environments; a minimal implementation seems to be working, although it is in very early stages.
There is still a lot more to think about and handle for dynamic data structures in a concurrent environment; I think it will also end up stress-testing the earlier assumptions we have made in our memory-safety framework.


If you have found this content useful, please consider donating. It would help me pay my bills and create more of this.