kin - an experiment in high-performance scripting

Kin is the name for a dynamic language I've been toying with for some time. There's a SourceForge project, http://kin.sourceforge.net, which dates from when I was targeting the JVM. I may continue with that, or just put chunks on tincancamera.

The main aim of kin was to make writing engineering applications easier. In the tradition of smile (not sure of the name), the z/OS scripting language used at BAE for tying together its Fortran stress routines, and of more recent efforts such as SciPy (or arguably SWIG as a meta-case), kin was intended to allow scripting of numeric libraries.

I'm getting interested in kin again as a base for experimenting with efficient interpretation of JavaScript (as I can't be bothered writing a language right now).

This has a few effects on the virtual machine:
  1. The target machines are what would be considered server or workstation class at BAE. Kin is to target processors with 1 GB RAM, a 128-bit SIMD architecture and 64-byte cache lines. That's the spec of my little laptop, the lowest machine that's not a dog.
  2. Efficient FFI (foreign function interface). Swig may help with the boilerplate, but in general foreign_foo(foreign_bar()) should cost no more in kin than it does in C.
  3. Traits and mixin-based inheritance. A mixin transforms a class into a sub-class. Traits extract the set of slots that lets the code generator access an object efficiently - if a function contains var a:int = foo.x and foo.bar(a + 4), then foo has traits {x:fix_int, bar:((integer)):*}, so the offsets of these slots in the first foo value encountered may be cached and the calls inlined, with any further objects mapped to the same offsets. Mixins are functions which add traits to a type, and allow either class- or prototype-based inheritance. If each object's slot table is a form of dynamically extended hash, and mixins are arranged to be monotonic over the index of the slots in the hash, then the offsets of the slots referenced by the traits inferred from an object's use in an invocation should usually be stable - or that's one of the things kin will be attempting to investigate. And if they're stable, they can be hoisted out of loops too.
  4. Escape analysis. For both performance and concurrency reasons.
  5. To support type inference and relational programming, a simple Prolog core, which will also be available at the programming level so kin can use it for declarative coding.
  6. Something on concurrency. I need to grok Erlang properly to have an opinion on message passing vs shared memory, but if using shared memory, kin should at least support transactions, so that an object is only released to other threads in a consistent state. If escape analysis shows the state isn't shared, the transaction should be a no-op; otherwise there may need to be a copy and lock of the object's state - which again would use the analysis of which traits of an object a function exploits.
  7. SIMD array comprehensions, big_num, etc. The Java kin had a fast big_num for large numbers; the lower-level SSE operations should make it of the order of four times faster. The problem is getting the operations up to a level where I can write a fully efficient division algorithm without getting lost, which I never managed in Java.
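
To make the offset-caching idea in (3) concrete, here's a minimal sketch in Python (the Layout and CallSiteCache names are illustrative, not kin's actual machinery): slots get monotonic indices as they're added, and a call site remembers the slot table and offset of the first object it sees, so later loads against the same layout are plain array reads.

```python
class Layout:
    """A slot table shared by objects built the same way."""
    def __init__(self):
        self.offsets = {}  # monotonic: a slot keeps the index it was first given

    def index(self, name):
        return self.offsets.setdefault(name, len(self.offsets))


class Obj:
    def __init__(self, layout):
        self.layout = layout
        self.slots = []

    def set(self, name, value):
        i = self.layout.index(name)
        while len(self.slots) <= i:
            self.slots.append(None)   # dynamically extend the slot array
        self.slots[i] = value


class CallSiteCache:
    """Inline cache: remember the layout and offset seen on the first call."""
    def __init__(self, name):
        self.name = name
        self.layout = None
        self.offset = None

    def load(self, obj):
        if obj.layout is not self.layout:   # miss: (re)resolve the offset
            self.layout = obj.layout
            self.offset = obj.layout.index(self.name)
        return obj.slots[self.offset]       # hit: a plain indexed read


point = Layout()
a, b = Obj(point), Obj(point)
a.set("x", 1)
b.set("x", 2)

site = CallSiteCache("x")
values = [site.load(o) for o in (a, b)]   # second load hits the cache
```

If the loads stay stable like this, hoisting the resolved offset out of a loop is exactly the optimisation described above.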
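
The Prolog core in (5) boils down to unification plus backtracking search. A toy unifier - a Python stand-in for the idea, not kin's implementation - might look like:

```python
class Var:
    """A logic variable; distinct instances are distinct variables."""
    def __init__(self, name):
        self.name = name


def walk(term, subst):
    # Follow bindings until we reach a non-variable or an unbound variable.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term


def unify(a, b, subst):
    """Return a substitution unifying a and b, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if isinstance(a, Var):
        return subst if a is b else {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):          # unify compound terms element-wise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None    # atoms must match exactly


X, Y = Var("X"), Var("Y")
s = unify(("point", X, 3), ("point", 2, Y), {})
```

Exposing this at the programming level is what would let kin's type inference and user code share the same declarative machinery.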
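
One shape the transactions in (6) could take - again a sketch, not a settled design - is copy-then-swap: a writer mutates a private copy and publishes it in a single assignment, so other threads only ever see the old state or the new one, never a half-written mix.

```python
import threading

class TxObject:
    """Writers mutate a private draft; commit swaps it in as one step."""
    def __init__(self, state):
        self._state = dict(state)
        self._lock = threading.Lock()

    def read(self):
        # A single reference read: always a consistent snapshot.
        return self._state

    def update(self, fn):
        with self._lock:                  # one writer at a time
            draft = dict(self._state)     # the copy needed when state has escaped
            fn(draft)                     # mutate the draft freely
            self._state = draft           # publish atomically


acct = TxObject({"balance": 100})

def withdraw(d):
    d["balance"] -= 30

acct.update(withdraw)
```

The escape-analysis connection: if the object is provably thread-local, the lock and the copy can both be skipped and update degenerates to an in-place mutation - the no-op transaction mentioned above.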
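
For (7), the heart of a big_num is limb arithmetic with carry propagation - exactly the kind of loop SIMD would batch. A scalar sketch with 32-bit limbs, least significant first (the representation is illustrative; kin's may differ):

```python
BASE = 1 << 32   # 32-bit limbs


def big_add(a, b):
    """Add two limb lists, propagating the carry."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % BASE)    # low 32 bits become the result limb
        carry = s // BASE       # overflow feeds the next limb
    if carry:
        out.append(carry)
    return out


def to_int(limbs):
    return sum(limb << (32 * i) for i, limb in enumerate(limbs))


x = [0xFFFFFFFF, 0x1]           # the limbs of 2**33 - 1
y = [0x1]
z = big_add(x, y)               # 2**33, i.e. limbs [0, 2]
```

Addition vectorises straightforwardly; it's division, with its data-dependent quotient digit estimation, that's hard to keep both efficient and comprehensible - the problem described above.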
So that's the sort of thing kin's a test bed for. The relational side used to be more emphasised (it was functional + Prolog + numerics, built on a small Lisp compiler core), hence the name.


