2007-09-05

More concurrent languages

Tonight I watched Advanced Topics in Programming Languages: Concurrency/message passing Newsqueak, a talk introducing the Newsqueak language, which was later developed into the Alef and Limbo concurrent languages used in Plan 9 and Inferno.

Not much to add to it - it seems a bit light on the lambda side, not as cleanly calculus-driven as a functional language, and it doesn't have curly braces for the real-world crowd either. Maybe it was just old.

JavaScript with channels might be nice; the generator/trampoline threading library could be extended for that, since a concurrent process reading from a blocking channel and a generator function composed with a reader function are very similar shapes.
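The similarity is easier to see in code. Here's a minimal sketch - in C++ rather than JavaScript, since that's where the rest of this post lives, and with invented names - of the two shapes side by side: a reader composed with a blocking channel and the same reader composed with a generator differ only in who blocks and who computes on demand.

    // Illustrative only: a reader pulling from a blocking channel and a
    // reader pulling from a generator-style object have the same shape -
    // "give me the next value". Names here are invented for the sketch.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>

    // A channel whose read blocks until a concurrent writer supplies a value.
    class BlockingChannel {
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<int> q_;
    public:
        void write(int v) {
            { std::lock_guard<std::mutex> g(m_); q_.push(v); }
            cv_.notify_one();
        }
        int read() {                               // blocks the reading process
            std::unique_lock<std::mutex> g(m_);
            cv_.wait(g, [this] { return !q_.empty(); });
            int v = q_.front(); q_.pop();
            return v;
        }
    };

    // A generator-style producer: blocking is replaced by the reader calling
    // next() and the producer computing the next value on demand.
    class CountingGenerator {
        int n_ = 0;
    public:
        int next() { return n_++; }
    };

    // The reader composes the same way with either source.
    template <typename Source, typename Next>
    void read_three(Source& src, Next next) {
        for (int i = 0; i < 3; ++i) std::printf("%d\n", next(src));
    }

    int main() {
        CountingGenerator gen;
        read_three(gen, [](CountingGenerator& g) { return g.next(); });

        BlockingChannel ch;
        ch.write(10); ch.write(11); ch.write(12);  // normally a separate thread
        read_three(ch, [](BlockingChannel& c) { return c.read(); });
    }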

What Newsqueak had was a strong tail-call bias; having small stacks is good for lightweight process or transducer composition languages. In the various talks and papers I've seen, I still haven't seen much on using mobile processes to eliminate locks - if you can group the execution of transducers into threads so that those which interact are serialised correctly, then you don't need to acquire locks when communicating over channels. I'd really like to be working on that, rather than my current work (mainly UI work, as that's what needs doing on the project, with the implementation of access control in the XMPP server we're producing coming up in the near future).

The simulation system is based on a pub-sub engine, and each property is locked on read or write. Entities' behaviours are triggered by a global tick event, which cascades through a graph of transducers configured at runtime. When the tick completes, state is published to subscribing entities, which can trigger further transducer cascades. The UI elements interact using the pub-sub and request/response messaging over XMPP. Multiple UIs can attach to the simulation server, and the server can use services on other machines via XMPP.
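Roughly, the engine has this shape - a hypothetical C++ sketch with invented names, not the project's actual classes: every property access takes a lock, and a publish cascades into subscribing transducers.

    // Hypothetical sketch of the shape described above: a pub-sub property
    // that locks on every read and write, and a global tick that cascades
    // through a runtime-configured graph of transducers.
    #include <mutex>
    #include <vector>

    class Transducer;

    // A published property: every access takes its mutex.
    class Property {
        mutable std::mutex lock_;
        double value_ = 0.0;
        std::vector<Transducer*> subscribers_;
    public:
        void subscribe(Transducer* t) { subscribers_.push_back(t); }
        double read() const {
            std::lock_guard<std::mutex> g(lock_);
            return value_;
        }
        void publish(double v);                 // defined after Transducer
    };

    class Transducer {
    public:
        virtual ~Transducer() {}
        virtual void onTick() = 0;              // driven by the global tick
        virtual void onPublish(double v) = 0;   // driven by upstream properties
    };

    void Property::publish(double v) {
        {
            std::lock_guard<std::mutex> g(lock_);
            value_ = v;
        }
        for (Transducer* t : subscribers_)      // cascade to subscribing entities
            t->onPublish(v);
    }

    // The global tick: let each transducer run, publishing results that may
    // trigger further cascades through the graph.
    void tick(std::vector<Transducer*>& graph) {
        for (Transducer* t : graph)
            t->onTick();
    }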

Currently, each transducer object holds its own state and pointers to the channels to other transducers, and write events are implemented as virtual function calls using inversion of control. There's no tail-call elimination - values aren't kept on the stack, but the chain of nested calls is, so a tail-call-style cascade exhausts the stack in 300 events. The main tick process runs in a single thread. The system also uses a TCP library which launches a thread for each connection, so you can end up with one thread doing lots of work and hundreds waiting for input. Since the simulation engine builds on a generic thread-safe pub-sub system, the locking strategies are quite conservative, and many of the reads or writes to channels require acquiring locks.
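Something like the following sketch (invented names again) shows why the cascades are stack-bound: each write event is a virtual call on the downstream listener, so a chain of events nests one frame per hop and nothing flattens it.

    // Illustrative sketch of the current style: write events delivered by
    // virtual function calls (inversion of control). Each handler that
    // forwards a value calls the next handler directly, so a chain of N
    // transducers nests N frames deep - there is no tail-call elimination.
    #include <cstdio>

    struct Listener {
        virtual ~Listener() {}
        virtual void onWrite(int value) = 0;   // invoked when upstream writes
    };

    struct Forwarder : Listener {
        Listener* next;                        // downstream end of the channel
        explicit Forwarder(Listener* n) : next(n) {}
        void onWrite(int value) override {
            // "Tail call": the last act is to push the event downstream, but
            // this frame stays live until the whole cascade unwinds.
            if (next) next->onWrite(value + 1);
        }
    };

    struct Sink : Listener {
        void onWrite(int value) override { std::printf("final value %d\n", value); }
    };

    int main() {
        // Build a chain of forwarders; a long enough chain exhausts the stack.
        Sink sink;
        Listener* head = &sink;
        Forwarder* chain[300];
        for (int i = 0; i < 300; ++i) { chain[i] = new Forwarder(head); head = chain[i]; }
        head->onWrite(0);                      // nests ~300 frames deep
        for (int i = 0; i < 300; ++i) delete chain[i];
    }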

I'd like to move them all to an actor-model style system which integrates with the TCP select, so there are fewer threads doing more work, and without the current call-stack limits. The idea is to build a graph of what can read each pub-sub channel and only use locking where channels need serialising across threads (or lock them once for use by all the actors active in the same thread). Unlike Newsqueak, the system uses asynchronous buffered channels for request/response semantics, so there would need to be a mechanism for posting channel events somewhere; currently there is a single queue with locking, which would need to change to a queue per thread, plus a mechanism for sharing the queues out again at the next sync point. Some behaviours (such as discovering a service and then opening a channel to it) would always need to be synced across threads; these would block the actor until the next sync point.
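A rough sketch of the per-thread queue idea, with invented names and plenty of hand-waving about scheduling: same-thread posts need no lock, cross-thread events sit in an outbox until the sync point, where a single coordinating thread hands them over and can rebalance actors.

    // Hedged sketch, not the project's code: each worker drains only its own
    // queue between sync points, so events between actors on the same thread
    // need no lock; cross-thread events wait in an outbox for the sync point.
    #include <deque>
    #include <functional>
    #include <vector>

    struct Actor {
        int home_thread = 0;                   // which worker runs this actor
        // ... actor state and behaviour would live here ...
    };

    struct Event {
        Actor* target;
        std::function<void(Actor&)> deliver;   // what to do on arrival
    };

    struct Worker {
        std::deque<Event> local;     // only this worker touches it between syncs
        std::vector<Event> outbox;   // events for actors on other threads

        void post(const Event& e, int my_id) {
            if (e.target->home_thread == my_id)
                local.push_back(e);            // same thread: no lock needed
            else
                outbox.push_back(e);           // held until the next sync point
        }

        // Drain the local queue, then return so the scheduler can sync.
        void run_until_sync() {
            while (!local.empty()) {
                Event e = local.front();
                local.pop_front();
                e.deliver(*e.target);
            }
        }
    };

    // At the sync point a single coordinating thread owns all the queues, so
    // it can hand cross-thread events over and rebalance actors between
    // workers without any per-event locking.
    void sync_point(std::vector<Worker>& workers) {
        for (Worker& w : workers) {
            for (Event& e : w.outbox)
                workers[e.target->home_thread].local.push_back(e);
            w.outbox.clear();
        }
        // (re-assigning actors' home_thread to balance load would also go here)
    }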

If you're only syncing threads at specific points (where you can also redistribute actors between your threads), you can get rid of the OS mutexes on channels and use single bits in an array - you're only asking whether a process will use a channel in its next operations, and you only need to ask once. The actor can then wait if its channel usage changes, letting its thread process the other actors.
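In sketch form (again hypothetical), the bits just record which channels each thread's actors will touch before the next sync point, and the lock question is answered once per phase rather than once per message:

    // Rough sketch of "single bits in an array" replacing per-channel
    // mutexes: between sync points each thread has declared which channels
    // its actors will touch, so a channel only needs a lock when actors on
    // two different threads both have its bit set.
    #include <cstdint>
    #include <vector>

    using ChannelBits = std::vector<uint64_t>;   // one bit per channel

    inline void set_channel(ChannelBits& bits, size_t ch) {
        bits[ch / 64] |= uint64_t(1) << (ch % 64);
    }

    inline bool uses_channel(const ChannelBits& bits, size_t ch) {
        return (bits[ch / 64] >> (ch % 64)) & 1;
    }

    // Decided once per sync point, not per message: does any actor on
    // another thread use this channel during the coming phase? If not, the
    // channel can be read and written without taking a lock.
    bool needs_lock(size_t ch, const ChannelBits& my_thread,
                    const std::vector<ChannelBits>& other_threads) {
        if (!uses_channel(my_thread, ch)) return false;
        for (const ChannelBits& other : other_threads)
            if (uses_channel(other, ch)) return true;
        return false;
    }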

Maybe I should just give up and try to find an Erlang shop rather than having to reinvent it all in C++ for performance reasons.


TME
