Some things I'm thinking about
I've been playing more with my mostly functional modelling and simulation language, kin. Here are some ideas I want to try and profile, to see whether they offer gains:
Carrying type and data as a tuple:
One-bit markers for escape detection:
Based on One-bit Reference Counting, a marker in the type field distinguishes 'shared' objects, which you can take a reference to, from 'non-shared' objects, which you must not.
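A minimal sketch of the idea, with names of my own invention (nothing here is from an actual kin implementation): a value is a (type, data) tuple, and bit 0 of the type word is the shared marker. Passing a value either aliases it (shared) or deep-copies it (non-shared).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

enum { SHARED_BIT = 1 };

typedef struct {
    uintptr_t type;  /* type id in the upper bits; bit 0 = shared marker */
    void     *data;
} value;

static bool is_shared(value v) { return v.type & SHARED_BIT; }

static value share(value v) { v.type |= SHARED_BIT; return v; }

/* Passing a value to another owner: alias it if shared, otherwise
 * deep-copy its payload (here just a byte buffer of known size). */
static value pass(value v, size_t size) {
    if (is_shared(v))
        return v;                     /* safe to take a reference */
    value copy = v;
    copy.data = malloc(size);
    memcpy(copy.data, v.data, size);  /* non-shared: must not alias */
    return copy;
}
```

The check is a single bit test, so the fast path (shared, alias) costs almost nothing; the copy path is where the profiling question lies.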
As an extension to the 'one-bit reference counting' style, you maintain a high-water mark instead of always copying. This is useful in structures such as VLists, so you don't automatically copy on passing, and it allows you to mutate an array in place if it can be proven that all reads happen before writes, as in matrix multiplication:
ax[i] <- ax[i] + row[i] * col[i]
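A sketch of that update, again with my own names: the accumulation reads each element before writing it, so when the array is unshared the update can go in place; when it is shared, a copy is taken first, as the one-bit scheme requires.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    double *elems;
    size_t  len;
    bool    shared;   /* the one-bit marker, lifted to the array */
} dbl_array;

/* ax[i] <- ax[i] + row[i] * col[i]: in place when unshared,
 * copy-on-write when shared. */
static dbl_array axpy_update(dbl_array ax, const double *row,
                             const double *col) {
    dbl_array out = ax;
    if (ax.shared) {                  /* must not mutate a shared array */
        out.elems = malloc(ax.len * sizeof *out.elems);
        memcpy(out.elems, ax.elems, ax.len * sizeof *out.elems);
        out.shared = false;
    }
    for (size_t i = 0; i < ax.len; i++)
        out.elems[i] = ax.elems[i] + row[i] * col[i];
    return out;
}
```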
Block order arrays for cache sensitive iteration and super compiled functors:
In a simulation, it's not uncommon to have a reduction on some function of a few attributes of an entity:
let total_force = reduce(array_of_entities, lambda(e:entity, ax) => e.mass * e.acceleration + ax, 0)
If this is over a homogeneous array, and mass and acceleration are simple double values, this would translate in C to picking 8 bytes here and there from the array's memory. Suppose instead each field is kept either in 'column order', or in what I'll call block order, where the fields for a cache line's worth of objects are held contiguously. This should both reduce cache misses and allow the use of SIMD instructions to process the data. The obvious disadvantage is that an object in an array no longer has the same layout as a single object on the heap, and to exploit it you need either a super-compiler or a trace-based JIT.
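A C sketch of what block order might look like for the reduction above (layout and names are illustrative assumptions): with 64-byte cache lines, eight doubles of each field are held contiguously per block, so the inner loop streams over contiguous lanes a compiler can vectorise, instead of striding through whole structs.

```c
#include <assert.h>
#include <stddef.h>

#define B 8   /* 64-byte cache line / sizeof(double) */

/* One block holds the fields of B entities, field-major. */
typedef struct {
    double mass[B];
    double acceleration[B];
} entity_block;

static double total_force(const entity_block *blocks, size_t nblocks) {
    double total = 0.0;
    for (size_t b = 0; b < nblocks; b++)
        for (size_t i = 0; i < B; i++)   /* contiguous, SIMD-friendly */
            total += blocks[b].mass[i] * blocks[b].acceleration[i];
    return total;
}
```

Column order is the limit case where the block size is the whole array; block order keeps the two fields of the same entity within a couple of cache lines of each other, which matters when the reduction touches more than one field.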
Effective model for scopes based on lists, symbols and objects:
Building an interpreter in Java, where it's tempting to use maps for everything, I found that the properties of an object and the properties of a lexical scope are much the same (the duality of closures and objects is well known), so I will try to define the binding model for values in kin using only symbols, integers and lists.
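One way the shared model might look, sketched in C with hypothetical names: a scope (equally, an object's property set) is a list of (symbol, value) bindings plus a link to the enclosing scope, and lookup walks both lists.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A binding is a (symbol, value) pair; symbols are strings here
 * for brevity, though interned symbols would compare by pointer. */
typedef struct binding {
    const char     *symbol;
    int             value;
    struct binding *next;    /* next binding in this scope */
} binding;

typedef struct scope {
    binding      *bindings;
    struct scope *outer;     /* enclosing scope, NULL at top level */
} scope;

/* Walk inner-to-outer, first match wins (lexical shadowing). */
static const int *lookup(const scope *s, const char *symbol) {
    for (; s; s = s->outer)
        for (const binding *b = s->bindings; b; b = b->next)
            if (strcmp(b->symbol, symbol) == 0)
                return &b->value;
    return NULL;
}
```

The same structure serves as an object if the `outer` link points at a prototype or class rather than an enclosing scope, which is the duality in data-structure form.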
Using CAS for thread safe homogeneous VLists
Similar to the vector implementation in Lock-free Dynamically Resizable Arrays.
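A minimal C11 sketch of the CAS pattern, applied here to growing a list spine (a much-reduced stand-in for a VList, not the paper's full construction): a writer builds the new cell privately, then publishes it with a compare-and-swap; on failure another thread won the race, so it retries against the freshly observed head.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct cell {
    int          value;
    struct cell *next;
} cell;

static _Atomic(cell *) head = NULL;

static void push(int value) {
    cell *c = malloc(sizeof *c);
    c->value = value;
    c->next = atomic_load(&head);
    /* On failure, compare_exchange stores the current head into
     * c->next, so the loop retries with an up-to-date snapshot. */
    while (!atomic_compare_exchange_weak(&head, &c->next, c))
        ;
}
```

Since cells are immutable once published, readers need no synchronisation beyond the atomic load of the head, which is what makes the approach attractive for homogeneous VLists.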
Options for compiling Verilog to shader language
Having had an interview last week as a systems modelling engineer with a company who were using C as the modelling language for timing simulations in embedded memory controllers - which is a bit like going for a job as a car mechanic and discovering that you're being quizzed on your wood-working skills - I was thinking about compiling an appropriate modelling language to something which executes efficiently. Though their particular problem - timing conflicts - I would have thought has an analytic solution.
Something after UML
Speaking of Verilog, Verilog came into existence because even standardized schematic diagrams don't carry strong enough semantics and are not amenable to algebraic analysis, and graphical notations don't scale to large systems without hiding information.
Pi-calculus came into existence because Petri nets don't scale to large systems and are not amenable to algebraic analysis.
UML is very much in the standardised schematic mould, lacks formal semantics, and relies on hiding information to scale.
Often the hidden information in UML is very important - what appears to be a good design is not a good design if its simplicity is achieved by hiding most of the attributes and operations of a class, as often happens when a class acquires multiple unrelated responsibilities. The notation allows you to hide details which should indicate that the design ought to be factored into smaller units, and in fact encourages such behaviour as a mechanism to scale up to showing many classes. For example, in Altova UModel:
You can customize the display of classes in your diagram to show or hide individual class properties and operations. ... This feature lets you simplify the diagram to focus on the properties and operations relevant to the task at hand.
If the class has features which are not relevant, then refactor it. Don't hide them just to keep the documentation from looking complicated. The classes of well-factored systems have nothing to hide, but require more relations between the classes, which UML tools don't handle well (last time I checked, none provided basic auto-routing or anything like good layout engines), and they look more complicated even though they are simpler - in the same way a dry stone wall is simpler than a minimalist plastered white apartment, but is visually more complex.
So what might come after UML then? What would not militate against refactoring, but still allow a visual overview? What notation might give good analytic properties for software, rather than being a system schematic with loose semantics?
I don't know yet.
Bjarne Stroustrup on the Evolution of Languages
A few comments on this -
The illustrations in the interview don't appear related to it, other than being of an IDE ("I might have a different opinion if a good IDE was universally available" - cue Visual Studio, which implies that VS isn't a good IDE) and that the UML illustrates multiple inheritance, though the mechanism for dispatch in the paper is not based on multiple inheritance (at least as shown in the diagram).
It's an interesting paper though, and I would like open methods in a language.
I don't think that DSLs should be just extensions to general purpose languages. Mainly because that implies a specific execution model, and if you have a DSL then execution shouldn't be part of the domain model (unless it's something like Verilog, or the input to a program proof tool, in which case execution is a domain concept). DSLs should allow the modelling of a domain, and lose their benefit - abstraction away from execution - if they are tied to a programming language. If you have a model of the physical domain, then you can state 'F = m * a' and not have that imply that the values named by 'm' and 'a' are multiplied and stored in the location named 'F', as binding to C++ semantics would have it. Otherwise you need at least three models - 'F = m * a', 'm = F/a' and 'a = F/m' - to let you represent everything that the one entity in the domain represents. Which would create a mismatch between the language and the domain, eliminating the benefit of a DSL.
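The point can be made concrete. The single domain relation stands for all of its directed readings, which a language with assignment semantics forces you to enumerate as three separate models:

```latex
F = m\,a
\quad\Longrightarrow\quad
F := m \cdot a, \qquad
m := F / a, \qquad
a := F / m
```

A DSL that keeps the relation as a relation can derive whichever directed form a given computation needs, rather than making the direction part of the model.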
I've also been reading 'Programming Language Pragmatics' and working on my model-based language kin a lot, adapting the parser and code model of the modelling language I used for a state model of some concurrent code to support kin. I was generating Java as a first cut to bootstrap a full compiler, but Java's generics aren't rich enough to support kin's parametric types, and without them it's no easier to generate Java than C. So I will go straight to a C back end, and generate an intermediate form which can be interpreted. It's more important for kin to be able to call C libraries such as the Intel Math Kernel Library than Java libraries, and I don't really like the sort of compromises you have to make to fit in with Java's libraries, however nice the JVM's HotSpot and garbage collector are to have.
Andy Goldsworthy - Rivers and Tides
We got the DVD as a wedding gift, and watched it last night.
I like Andy Goldsworthy, both for his aesthetic and the humility that lets a work exist within an environment which makes it transient, or fluid.
Watching him create a work, sometimes it would fall. Often it was compromised to fit the time scale imposed by the environment - by tide or light.
At each fall, a discovery. More of the nature of the work is revealed; the more failures, the more effortless and whole the final work becomes.
I really like the idea of a flowing form moving through or around obstacles, adapting to what's there, constructed out of what is available, but maintaining its own identity, a recognised figure.
The hanging works, stalks randomly pinned together with thorns, which then form a coherence to mark a hole. A form emerging from chaos, using the chaos, constructed with the chaos.
The works accept that they are not static, but will be changed after the artist commits them to the environment.
There's something in programming - called the quality without a name - which is what software architecture is about. It isn't about perfection, or locking everything down to nuts and bolts, or about hiding the chaff and chaos under languages and frameworks, but about habitability and fluidity, and coherence. It's what the original pattern movement was about.
Is this work good to live with? Does it fit with the environment? Can it flow and adapt with the rivers and tides?
Note to self:
I added bootparamsd to my bubba (so I can increase RAM disk size), and now if it's plugged into the ADSL I can't get the internet.
Also, continuing errors on booting the Amilo A1650g - added noapic to the boot params and turned off fast boot in the BIOS. Still don't know why there's an IRQ conflict.