Planet Smalltalk

July 26, 2016

Torsten Bergmann - Developing mixed Smalltalk/Java apps in Smalltalk/X

July 25, 2016

Torsten Bergmann - How to use git and GitHub with Pharo

Explained in a blog post here.

Torsten Bergmann - Pharo Kiosk system

Pharo 1.1 was used to build ATM-style software for a Russian bank. Such devices can be found on the streets of Moscow. Here is a sample video:


July 24, 2016

Pharo Weekly - Navigating Objects


This is a side project of my work on Seamless and RemoteDebuggingTools. You can load it with:

Gofer it
	smalltalkhubUser: 'Pharo' project: 'ObjectTravel';
	configurationOf: 'ObjectTravel';
	loadStable.

ObjectTravel is a tool to deeply traverse the “native” references of a given object through instance variables and “array contents”.
Usage is quite simple:

traveler := ObjectTraveler on: (1@2 corner: 3@4).
traveler referencesDo: [:eachRef | eachRef logCr].

Here is a list of the available methods:

  • collectReferences
  • countReferences
  • skip: anObject
  • traverseOnly: predicateBlock
  • copyObject
  • findAllPathsTo: targetObject
  • replaceCurrentReferenceWith: anObject
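To give a feel for the protocol listed above, here is a hedged sketch; the selectors come from the list, but the exact semantics and return values are assumptions, so check the class comments before relying on them:

```smalltalk
"Sketch of the listed ObjectTraveler protocol; return values are assumptions."
| traveler |
traveler := ObjectTraveler on: (1@2 corner: 3@4).
traveler countReferences.             "how many references the traversal visits"
traveler skip: 3.                     "do not traverse into the object 3"
traveler findAllPathsTo: 2.           "reference paths from the rectangle to 2"
(ObjectTraveler on: (1@2 corner: 3@4)) copyObject.   "a copy built by traversal"
```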
Best regards,

Pharo Weekly - Pillar 4.0.0 is out


I’m happy to announce the latest release of Pillar.

This release has been possible because of the hard work of Damien Cassou, Cyril Ferlicot, Yann Dubois, Thibault Arloing and Lukas Komarek.

What does it bring, and what are the largest changes?

  • Huge cleanup of code and dependencies
  • Many bug fixes
  • Huge refactoring of internal parts
    • Extract phase management into an external project (LightPhaser)
    • Transformers and Phases are all Pipes

Remove Compilation cycle

  • Remove template handling from Pillar
  • Remove generation of
  • Pillar now exports files to JSON format
  • Command for Pillar archetypes: “./pillar archetype book”, where “book” can be replaced by other archetype names (see the Pillar documentation)
  • Possibility to load an archetype with a Makefile to compile pillar files


New features

  • Check phase to check syntax
  • EPub exporter for e-books (use pillar archetypes for this)
  • Semantic links to Youtube and Wikipedia
  • Citations for LaTeX
  • Structures (see Pillar documentation)
  • Footnotes for HTML, Markdown, LaTeX and AsciiDoc
  • Improvement of parsing configuration failure message

Major changes

  • Metadata field in configuration to separate data from configuration properties
  • Support files in the configuration no longer exist
  • “disableTransformers” property is now named “disabledPhases”
  • AsciiDoc file extension is now “.asciidoc”
  • Pillar now manages one input file, not a collection of input files anymore
    • Parameter inputFiles is now replaced by inputFile
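For illustration, a minimal configuration reflecting those renames might look like the fragment below. This is a hedged sketch: Pillar’s configuration file is written in STON, but the exact keys beyond the two renamed ones are assumptions, so check the Pillar documentation.

```ston
{
  "metadata" : { "title" : "My Book" },
  "disabledPhases" : [ ],
  "inputFile" : "book.pillar"
}
```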

The documentation of Pillar will be updated as soon as possible to fit those changes.



Smalltalk Jobs - Smalltalk Jobs – 5/24/16

  • New York, NY: Senior Kapital Application Developer (Job ID 160075688) at J.P. Morgan
  • Required Skills:
    • Bachelor’s degree or equivalent in Computer Science, Engineering (Any), Mathematics, or related field
    • 4 years of experience in Application Development, financial modeling and risk analysis, specifically around interest rate derivatives, or related experience; OR Master’s degree or equivalent in Computer Science, Engineering (Any), Mathematics, or related field
    • 2 years of experience in Application Development, financial modeling and risk analysis, specifically around interest rate derivatives, or related experience.
    • Demonstrated working knowledge of object oriented programming.
    • Demonstrated working knowledge of Smalltalk.
    • Experience with object databases.
    • Demonstrated working knowledge of PnL and risk management as it applies to interest rate derivatives.
    • Experience designing resilient and sustainable software solutions in support of trading desk’s needs.
    • Working knowledge of FpML as it relates to interest rate derivatives.
    • Experience following a defined development process with specific documentation requirements (e.g. SOX documentation requirements).
    • Demonstrated knowledge of linear algebra and numerical analysis methods as applicable to pricing fixed income derivatives.
  • Buenos Aires, Argentina: Kapital Financial Developer – Associate (Job ID 150122882) at J.P. Morgan
    • Required Skills:
      • Flexibility, a desire to learn
      • Excellent interpersonal skills, team player
      • Desire to learn an OO language
      • An interest in financial derivatives products
      • Excellent business analysis and/or project management skills
    • Wanted Skills:
      • Smalltalk
      • mathematical or computer science background
      • prior experience in a customer services or operate role
Good luck with your job hunting,
James T. Savidge


Filed under: Employment Tagged: jobs, Smalltalk, Smalltalk jobs

Pharo Weekly - A taste of bootstrap


As you may know, we are working on the Pharo image bootstrap – the process that can generate an image from source code and initialize it correctly. For practical reasons we do not bootstrap the standard image at once; instead we are trying to bootstrap a small headless kernel image and then load the rest of the system into it.
The good news is that we are succeeding in our effort. We are already able to produce quite usable images, as you can test here:
From the Pharo/Squeak point of view this image is very special because it doesn’t contain any object inherited from the ’80s. Pharo lost its umbilical cord.
Notice that the initial display width is too narrow and we still need to do a lot of work on the building process, but in the coming weeks and months this will change Pharo development a lot – especially once it is combined with Git support.
— Pavel

Torsten Bergmann - A taste of bootstrap

With Smalltalk's image approach you save the whole state of your "object oriented world" to disk and later continue at the same point of execution where you left off. Some objects have been part of the standard image since the 1970s, which makes Smalltalk images software artefacts that have been maintained for a very long time.

Other Smalltalks work without images and bootstrap right from the beginning (like Amber, a Smalltalk running on top of JavaScript).

Nonetheless it also makes sense to bootstrap new images right from the start, and (as I already reported) the Pharo community is working on that. Now yet another step has been taken, which you can read about and now also try here.

Note that the bootstrapped image is already in Spur format.

Torsten Bergmann - ObjectTravel

A tool to traverse object references for Pharo. Read more.

Torsten Bergmann - JSONWebToken for Pharo

JWT (JSONWebToken) is a token format suitable for authentication and authorization. Read about the Pharo implementation.

Torsten Bergmann - Freewill - GA framework for Pharo

Freewill is a genetic algorithm framework for Pharo. Read more here or see it in action here.

Benoit St-Jean - Freewill in progress

Freewill was able to solve the Burma14 TSP problem for the first time tonight!

(Click on image to enlarge)

Freewill solving Burma14

Now, it’s just a matter of discovering why my distance calculations are a little bit different from the TSPLIB ones.  Either TSPLIB is using a different formula to calculate its distances (I’m using Haversine) or it’s using different ellipsoid data (I’m using the WGS84 values).  But since everybody uses WGS84 nowadays and my distances are a little greater than the TSPLIB ones, I suspect it’s using one of the Vincenty formulae.  That’s what I’ll investigate next!
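For reference, the Haversine distance mentioned above can be sketched in Pharo like this. The coordinates in the last line are illustrative, not actual Burma14 data, and the block is just a sketch of the formula:

```smalltalk
"Great-circle distance via the Haversine formula, with the WGS84 mean
 Earth radius. Unary messages bind tightest, so (x / 2) sin squared
 reads as (sin(x/2))^2."
| haversine |
haversine := [ :lat1 :lon1 :lat2 :lon2 |
	| r dLat dLon a |
	r := 6371.0088.  "mean Earth radius in km"
	dLat := (lat2 - lat1) degreesToRadians.
	dLon := (lon2 - lon1) degreesToRadians.
	a := (dLat / 2) sin squared
		+ (lat1 degreesToRadians cos * lat2 degreesToRadians cos
			* (dLon / 2) sin squared).
	2 * r * a sqrt arcSin ].
haversine value: 16.47 value: 96.10 value: 14.05 value: 98.12.
```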

Besides, while looking at the GeoSphere documentation, I was thinking it would be cool to port that stuff to Smalltalk!  Does anyone need this?!?!  Drop me a line if it’s the case!


Classé dans:Pharo, Smalltalk Tagged: distance, ellipsoid, geodesy, geographical, GeoSphere, Haversine, Pharo, Smalltalk, TSPLIB, Vincenty, WGS84

July 23, 2016

Pharo Weekly - JSONWebToken for Pharo


Thanks to an inquiry from Sven, I published an implementation of JSONWebToken to SmalltalkHub. It is available at http://smalltalkhub.com/#!/~NorbertHartl/JSONWebToken

For those who don't know it: JSONWebToken, or JWT for short (pronounced "jot"), is a token format suitable for authentication and authorization. The token consists of a header, a payload and a signature. The header defines crypto algorithms, compression and other things needed to read a token on reception. The payload is called a claim set, which is basically a dictionary with well-known and custom keys. If we think about OAuth or OpenID, the values contained map directly to JWT claims. For OpenID Connect, an identification mechanism on top of OAuth, the usage of JWT is one of the building blocks.

What are the advantages in using JWT?

- it defines a header describing the encoding of the content, so it is quite flexible in how compression and encryption are done
- it defines a payload which can map arbitrary keys, and there is a set of well-known keys that OAuth and OpenID implementations understand
- it defines a signature that makes it easy to trust the information contained, even when the token passes through an untrusted party
- the token format is a single-line string, so it can be used e.g. in HTTP authentication headers
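To make the format concrete: a JWT is just three base64url-encoded parts joined by dots. The following is a sketch of the wire format only, not the API of the package announced above; #base64UrlEncoded and #hmacSha256:key: are hypothetical helpers standing in for whatever base64url encoder and HMAC implementation you have at hand:

```smalltalk
"Wire format sketch only; not the JSONWebToken package's API.
 #base64UrlEncoded and #hmacSha256:key: are hypothetical helpers."
| header payload signingInput token |
header := '{"alg":"HS256","typ":"JWT"}'.
payload := '{"iss":"imageA","sub":"clientC"}'.
signingInput := header base64UrlEncoded , '.' , payload base64UrlEncoded.
token := signingInput , '.'
	, (self hmacSha256: signingInput key: 'shared secret') base64UrlEncoded.
```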

A problem JWT can solve:

In our company we have a lot of little REST servers, each serving some duty. To minimize the chaos I want to have a central authentication and authorization point. Assume we have 20 images running, and look at the typical way authorization works:

there is image A (Authentication), image S (Service) and client C. Client C wants to use the service S:

1. client C authenticates and retrieves authorization information from A (or from S which redirects him to A)
2. client C hands out the authorization information to S
3. S needs to check at A if the information is valid (client C could have modified it or generated it)
4. S grants C access

With 20 service images, every image would need to go back to A in order to check authorization information; the more service images you have, the more load it puts on A. In a JWT scenario the same exchange looks like:

1. client C authenticates and receives a JWT containing authorization information. The token is signed by A
2. client C hands out JWT to service S
3. S checks the signature of A and knows that the authorization information contained is valid. 
4. S grants C access



July 21, 2016

Pharo Weekly - New version of Seamless

I am glad to finally release a new version of Seamless (0.8.2).

It can be loaded with:

Gofer it
	smalltalkhubUser: 'Pharo' project: 'Seamless';
	configurationOf: 'Seamless';
	loadStable.
It works in Pharo 5 and 6.
It is a complete redesign of the original version, with the goal of making it more flexible, reliable and simple.
(The original version was created by Nikolaos Papoulias.)
Seamless is the foundation for RemoteDebuggingTools. It allows reusing the existing debugger to work with a remote process through transparent network communication. In particular, the new version provides the flexibility required to reduce the number of requests between distributed objects. No debugger changes were needed to make it work relatively fast with a remote model.
For more details, look at my blog and read the docs: Seamless and Basys.
As usual feedback is welcome.
Best regards,

Torsten Bergmann - Remote Debugging Tools for Pharo

A first version of the RemoteDebuggingTools project. It allows exploring and debugging remote images. Read more.

July 20, 2016

Cincom Smalltalk - Smalltalk Digest: July Edition

The July edition of The Cincom Smalltalk Digest is available now. This edition of the Smalltalk Digest puts the spotlight on #ESUG16, and we look forward to seeing you there!

July 19, 2016

Pharo Weekly - OpenGLES2 bindings ready

Hi all,

My tiny binding for OpenGLES2 is ready :) http://smalltalkhub.com/#!/~ThibaultRaffaillac/OpenGLES2/

It takes a different direction from that of NBOpenGL. I found the support for all versions of OpenGL overwhelming, both for beginners and for maintenance. With a limited number of functions I could carefully name all the messages.

A demo using either SDL2 or GLFW (faster) is included, supporting VSync and retina displays.
Tested only on Mac OSX, patches welcome!


Clément Béra - Sista: closed alpha version ??!


For developers unaware of it, the sista runtime is a high-performance layer on top of the current Cog / Spur VM for Pharo / Squeak and other Smalltalk languages. It aims to reach greater performance by speculatively optimizing and deoptimizing code at runtime, so frequently used code runs faster.

The benchmarks are getting kind of stable on the sista runtime. Binary trees, Fibonacci, Richards, the integer benchmark, Nbody and Threadring are running with reliable performance, while chameneosRedux and DeltaBlue are still crashing very often. For development, I am now starting to use the sista runtime myself, though it still crashes every half an hour or so. I think it's time to share the code with new people. If you're interested in contributing to the sista runtime, please contact me. As a requirement, you need to be able to compile the VM with different settings and to be willing to contribute. I already had multiple reviewers, but new people (especially contributors, not just reviewers) would be nice.

Integration status

The sista runtime relies both on full block closures and the new bytecode set.

I’ve been using the new bytecode set (SistaV1) for a couple of weeks now and it looks quite stable, both from the debugging and the execution points of view. There are a couple of remaining bugs that Marcus Denker is fixing, such as a UI bug (when there are errorOnDraw failures in Morphic, sometimes the image freezes as it cannot correctly map a temporary name to its value…).

More recently I introduced FullBlockClosures, which are still quite buggy. I compile the VM with interpreter mode for FullBlockClosure execution to avoid most issues, which leads to incorrect benchmark comparisons (usually optimized code has no closures, as they’re inlined, so it’s faster than interpreted closures). Currently I compile only part of the image (typically only the benchmarks) with full block closures to avoid crashes. Code with old closures can’t be inlined.

On the sista runtime itself, the debugger conceptually works (each context with optimized code can be deoptimized on request at any interrupt point). However, the debugger integration is not there yet at all. Many optimizations and deoptimizations happen during the first couple of minutes of development.

The current benchmarks still run between 1.2x and 1.8x. The thing is that very few optimizations are in, and the focus is on integration and stabilization, not performance. There’s no doubt that the sista runtime is very far from its full performance potential. Right now we’re looking with Eliot into easy-to-implement optimizations, such as store check elimination and object allocation optimizations, that should bring some benchmarks over 2x.

Optimizer architecture

As requested by Eliot, I try to give here an insight into the optimizing compiler. It’s difficult to give a full overview of the optimizer as there is no direct optimization flow. For example, the method to optimize is decompiled to the SSA-like IR (a.k.a. the Scorch IR), but then each time a block or method is inlined the decompiler is called again. Another example is the elimination of dead branches, which usually happens early on to fold inlined constants, but can happen again at a later stage if some operation between SmallIntegers has been resolved thanks to ranges / inlined constants.

The Optimizer is called on a context. The first step is to search for the best method to optimize on the stack (usually the optimizer picks a method quite close to the context with the tripping counter, but it can go a little further to properly inline some closures into their outer contexts). The method chosen is decompiled to the Scorch IR and then optimized.


The decompiler transforms the bytecode into the Scorch IR. The Scorch IR is a control flow graph with an SSA-like property (I understood the sea-of-nodes style IR too late, so the Scorch IR is not built that way). I say SSA-like as it’s a high-level IR, so it’s different from an LLVM-style IR (many instructions are not really typed, etc.). I’ll describe it later in the post, but the important points are:

  • It’s not stack-based, as that was really not convenient for the SSA property and for optimizations.
  • With the exception of the basic block control flow, all the instructions are scheduled linearly inside a basic block; there are no trees, nested instructions or anything like that.

The decompiler is very similar to Cogit’s decompiler or the in-image bytecode-to-AST decompilers: it uses a simulated stack to construct expressions.

Inspired by an early version of V8’s Crankshaft optimizer, the conversion to SSA is done quite naively: phi nodes are added aggressively for each variable that could need one, and a second pass removes unnecessary phis. In practice this is very efficient, as Smalltalk methods and blocks have small control flows and few local variables, and the phis need to be simplified only one method/block at a time.

The control flow graph is slightly canonicalized during the decompilation process:

  • Critical edges are split (similarly to LLVM, see here).
  • Loops are canonicalized so that a basic block with a predecessor through a back edge can have only 2 predecessors: one through a back edge and one through a forward edge.
  • Returns are canonicalized so each method or block has a single return to the caller (everything jumps to the same return), which is very convenient for inlining.

The last step of decompilation:

  • Computes the dominator tree (in a naive way, which works well as the control flows are simple and the dominator tree can be composed while inlining, but it would be interesting to see if the Lengauer-Tarjan algorithm performs better).
  • Reorders the basic blocks in post order. In addition to the post order property, the reordering keeps loop bodies contiguous, which sometimes changes the default Smalltalk control flow.
  • Removes redundant or unused phis.

Each of these last steps requires iterating over the control flow graph, but only over the basic blocks, not over each instruction.


There is no strict order of optimizations. Inlining is done mostly at the beginning, and passes that won’t trigger any further optimizations are done at the end.

Call graph inliner

The first phase is the CallGraphInliner. It basically iterates over every send, starting with inner loop sends and moving up to the outermost sends. It aggressively inlines every send with reliable information (from inline caches or by inferring the types). It inlines:

  • message sends calling non-primitive & quick-primitive Smalltalk methods
  • closure activations, if the closure is used only once (most closures)
  • “perform: aSelector” to a send if the selector is constant, to cover the case “aCollection do: #foo”, which is common in some frameworks
  • primitives that rely only on type information (primitives corresponding to #bitAnd:, #class, #<, #size, etc.)

After multiple attempts, I disabled the inlining of blocks with non-local returns, except when the block is created in the outermost method. It’s simpler for me, and most blocks can already be inlined (3% of blocks have non-local returns, and part of those can still be inlined). One can fix that later.
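For context, the classic shape of a block with a non-local return is the detect:ifNone: pattern from Collection; the caret inside the inner block is what makes inlining across frames tricky:

```smalltalk
detect: aBlock ifNone: exceptionBlock
	"The ^ each below is a non-local return: it exits detect:ifNone:,
	 not just the inner block passed to do:."
	self do: [ :each | (aBlock value: each) ifTrue: [ ^ each ] ].
	^ exceptionBlock value
```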

Dead branch elimination

Dead branch elimination removes branches on booleans, which are frequent mostly because of the Smalltalk “isSomething” pattern, which once inlined often leads to “true ifTrue: …” or things like that. Dead branch elimination is sometimes called again from a later pass when it seems it will remove additional branches.
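A hypothetical illustration of that pattern (the method and caller are made up, but the shape is the one described above):

```smalltalk
"Hypothetical example: inlining a trivial testing method exposes a
 branch on a constant that dead branch elimination can then fold."
isEmptyList
	^ false

"Caller before inlining:"
self isEmptyList ifTrue: [ ^ nil ] ifFalse: [ ^ self first ].
"After inlining isEmptyList:"
false ifTrue: [ ^ nil ] ifFalse: [ ^ self first ].
"After dead branch elimination:"
^ self first
```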


ABCD

ABCD inlines primitive operations and folds branches based on types and ranges. It’s based on this paper. This pass:

  • inlines primitives corresponding to #at:, #+, #-
  • removes dead branches when primitives matching #<, #> etc. can lead to a single branch due to the range information.

The pass temporarily transforms the IR into e-SSA (adding Pi nodes) as described in the ABCD paper, and reverts it back to SSA form at the end.

There are many more primitives we could uncheck there. I’ve done the most important ones; I’ll do the others (SmallInteger multiplication, etc.) later.


The last 2 main passes simplify data computation. The first one, LICM (Loop-Invariant Code Motion), attempts to move code up; it mainly moves traps out of loops. The second pass, GVN (Global Value Numbering), attempts to move code down and resolve some computations statically. It eliminates common subexpressions, solves primitive operations between constants, and all sorts of things like that.


The back-end is responsible for generating bytecodes back from the optimized IR.

It works as follows:

  • ExpandAndReduce: this pass expands macro instructions, i.e., instructions that are present to simplify the IR but that can’t be generated to bytecode without being converted to multiple instructions. For example, trapIfNotInstanceOf is transformed into a BranchIfNotInstanceOf to another basic block with an immediate trap, as the back-end can’t generate trapIfNotInstanceOf instructions. The pass also reduces other instructions; for example, it looks for returns and tries to move them up into branches to avoid jumping to a return.
  • Spill analysis: this pass analyses the instructions to find out which instruction needs to be a temp, which one can be a spilled value on the stack and which one is effect-only for the stack-based bytecode. In general it tends to generate more temps than spills, as that is easier to handle, but sometimes spills are critical for performance, as Cog’s JIT expects specific bytecode patterns to generate efficient instructions (typically JumpBelow instructions).
  • Liveness analysis: this pass computes both the liveness of, and the interferences between, the different variables that will be assigned to temps.
  • Temp index allocation: based on the liveness analysis results, phi coalescing and a graph coloring algorithm, it assigns temporary variable indexes to all the instructions that require a temporary slot.
  • Bytecode generation: this pass walks the graph and generates bytecodes. It also maps the optimized code’s bytecode pcs to the deoptimization metadata.

The Scorch IR

Let’s try to describe the IR a bit. It’s not perfect, but it works just fine. It’s a control flow graph of basic blocks, and each instruction has the SSA property.

Let’s try to give an example with this method. I know it’s a simple example with no inlining, but I can’t show everything at once:

  | array |
  array := #(1 2 3 4 5).
  1 to: array size do:
     [ :i | self nonInlinedEval: (array at: i) ]
 25 <20> pushConstant: #(1 2 3 4 5)
 26 <D0> popIntoTemp: 0
 27 <40> pushTemp: 0
 28 <72> send: size
 29 <D1> popIntoTemp: 1
 30 <51> pushConstant: 1
 31 <D2> popIntoTemp: 2
 32 <42> pushTemp: 2
 33 <41> pushTemp: 1
 34 <64> send: <=
 35 <EF 0E> jumpFalse: 51
 37 <4C> self
 38 <40> pushTemp: 0
 39 <42> pushTemp: 2
 40 <70> send: at:
 41 <91> send: nonInlinedEval:
 42 <D8> pop
 43 <42> pushTemp: 2
 44 <51> pushConstant: 1
 45 <60> send: +
 46 <D2> popIntoTemp: 2
 47 <E1 FF ED ED> jumpTo: 32
 51 <58> returnSelf


After decompilation, if I look at the send (self nonInlinedEval: (array at: i)), it looks like that:

[3.2] (S) self nonInlinedEval: [3.1].
  • [3.2]: this is instruction 2 of basic block 3.
  • (S): this instruction has deoptimization information attached.
  • self: as the IR is very high level, specific instructions (self, argument reads) are treated as immediate instructions, in a similar way to constants. Here it means the receiver of the send is self.
  • nonInlinedEval: the send’s selector.
  • [3.1]: the first argument of the send is the result of instruction [3.1].

Control flow graph

Let’s now look at the basic block 3.

 [3.1] (S) #(1 2 3 4 5) at: [2.1].
 [3.2] (S) self nonInlinedEval: [3.1].
 [3.3] (S) [2.1] + 1.
 [3.4] (S) backTo: 2.

The basic block is composed of 4 instructions. All the instructions but the last one are “body” instructions, as opposed to the last one, which is a control flow instruction. Each basic block always has a control flow instruction at the end. In the case of basic block 3, the last instruction is a back jump. As you can see, every instruction requires deoptimization information.

Finally let’s look at the whole method:

 [1.1] (S) #(1 2 3 4 5) size .
 [1.2] (S) loopHead.
 [1.3] goTo: 2.

 [2.1] phi: 1'1 [3.3]'3 .
 [2.2] (S) [2.1] <= [1.1].
 [2.3] (S) [2.2] ifTrue: 3 ifFalse: 4.

 [3.1] (S) #(1 2 3 4 5) at: [2.1].
 [3.2] (S) self nonInlinedEval: [3.1].
 [3.3] (S) [2.1] + 1.
 [3.4] (S) backTo: 2.

 [4.1] ^ self.

With the exception of the message sends and control flow instructions, we have:

  • a phi instruction (look up SSA if you don’t know what a phi is).
  • a loop head, which is there to record deoptimization information so the optimizer can hoist instructions such as traps out of loops.

This is the control flow graph. Every instruction is scheduled.

Optimized control flow graph

Let’s look at it again after running the optimization passes. The send nonInlinedEval: won’t be inlined because I forced it not to be for this example. In practice this example would be created from the inlining of #do: on an array.

 [1.1] goTo: 2.

 [2.1] phi: [3.3]'3 1'1 .
 [2.2] [2.1] USmiLessOrEqual 5.
 [2.3] [2.2] ifTrue: 3 ifFalse: 4.

 [3.1] #(1 2 3 4 5) UPointerAt [2.1].
 [3.2] (S) self nonInlinedEval: [3.1].
 [3.3] [2.1] USmiAdd 1.
 [3.4] (S) backTo: 2.

 [4.1] ^ self.

After the optimizations, multiple operations have been unchecked and the size operation has been resolved at compilation time. The branch is always taken on a boolean, so it does not require deoptimization information. In fact, only 2 instructions now require deoptimization information for bytecode generation: the send and the back jump. I guess that because the loop has a fixed small number of iterations I could remove the interrupt point on the loop, but I am not sure of the side effects with the non-inlined send, so I didn’t do it for now.

The instructions are shared between the control flow graph and the def-uses graph.

Def-use graph

Let’s look at the def-use graph of the instruction [2.2]. It looks like this:

[2.2] [2.1] USmiLessOrEqual 5. 
    [2.3] [2.2] ifTrue: 3 ifFalse: 4.

The instruction [2.2] is used only once, in instruction [2.3]. Instructions can also be used in deoptimization information, but that’s not the case for instruction [2.2].

Deoptimization information

Let’s look at the deoptimization information of the back jump. In this case we didn’t inline aggressively and no object allocation was removed, hence the deoptimization information is pretty simple.

The deoptimization information is a list of objects to reconstruct. The first object to reconstruct is the bottom context of the stack. In our case, a single object needs to be reconstructed: the context with the non-optimized method. Hence the inner collection looks like this:

an OrderedCollection(PSunkObj(Context;3343745))

A sunk object is an object whose allocation has been postponed from runtime to deoptimization time. In our case it’s a context, so it’s a pointer sunk object (as opposed to byte sunk objects and co). A “PSunkObj” describes the state of the object to reconstruct at a given point in the program. In this case, the object to reconstruct is a context.

This case is simple, but after inlining the optimizer currently needs to be able to recreate multiple contexts, temp vectors and closures; and maybe more in the future.

If we inspect the object, we can see that:

  • The class of the object to reconstruct is Context.
  • It has 7 fixed fields, which are reconstructed, in this case, using constants and the receiver of the optimized method.
  • It has 3 variable fields, which are reconstructed using constants for the first 2 and the value of the phi instruction for the last one.

I hope you got a feeling for what the IR looks like. I know it can be improved, and hopefully that will happen.

Next Steps

In terms of integration, the focus needs to be on polishing the new bytecode set (it works almost entirely), on FullBlockClosure integration (compiler, VM, debugger), and lastly on integrating sista better into the IDE, to be sure that all newly installed methods are caught and that the debugger works just fine.

In terms of optimizations, the next thing to do is to stabilize store check and immutability check removal, optimizations on object allocations, and branches on values guaranteed to be booleans. The other thing to do is to work on splitting, as it seems to be critical in the methods present in the benchmarks and in the IDE.

I hope you enjoyed the post.




Pharo Weekly - Metacello support for GitFileTree metadata-less mode

In the last month or so we’ve had a couple of different discussions on this list about adding support to Metacello (otherwise known as the “Cypress package extension”) for GitFileTree’s metadata-less mode[1][4], and earlier this week I released a new version of Metacello[2] that includes an updated version of the “Cypress package extension”[5].

Depending on the version of Pharo you are using, at some point in time, I expect this new release of Metacello to be available in the standard download. Until then, to install the latest version of Metacello into Pharo execute the following in a workspace:

Metacello new
	baseline: 'Metacello';
	repository: 'github://dalehenrich/metacello-work:master/repository';
	get.
Metacello new
	baseline: 'Metacello';
	repository: 'github://dalehenrich/metacello-work:master/repository';
	onConflict: [:ex | ex allow];
	load.

If you are using GitFileTree’s metadata-less mode and Metacello, then add the following method to your BaselineOf:

projectClass
	^ MetacelloCypressBaselineProject

and you are good to go.

If you are curious as to why “Cypress package extensions” are needed, you can read this comment[6] for a description of what rules Metacello uses when fetching/loading packages from a FileTree repository using a baseline.

On a slightly different topic, Alistair Grant ran into a bug a couple of months ago involving Metacello and how it handles semantic version numbers[7], and this release includes a bug fix … the bug was that the parser was too lenient and did not throw an error for some forms of invalid semantic version numbers … with this fix an error is thrown … of course it is entirely possible that there are ConfigurationOfs out in the wild that “depend upon the old behavior”, so if you get an “invalid version number” error while working with a configuration and discover that it is not practical to redefine the version numbers to conform to the semantic version number format, then you can add the following method to the ConfigurationOf and the old, buggy version of the parser will be used:

versionNumberClass
	^ MetacelloOldSemanticVersionNumber

If you are sharing code between GemStone and Pharo, then you will want to make sure that you install the new version of Metacello in GemStone as well. See the GemStone release announcement for details[8].



Benoit St-Jean - Freewill in action

The very first image of Freewill (see here for details) in action, trying to solve a ruzzle problem as well as a TSP problem (Burma14) and a simple diophantine equation (Hermawanto)! Click on the picture to enlarge!

First look at Freewill in action

Classé dans:Pharo, Smalltalk Tagged: Freewill, GA, genetic algorithms, Pharo, Smalltalk, TSP

Benoit St-Jean - What’s new?

What’s new?

After a major data loss (I haven’t given up on getting back all my data, mostly code repositories and databases!), I had to restart all my pet projects from scratch. Luckily, it’s easier the second time around, as they say! And, lucky me, I store all my personal stuff on the web! So here’s a list of what’s coming up on this blog.


Even though I had a decent working version of the genetic algorithm program to find the best Ruzzle grid (original posts in French here, here and here), I wasn't satisfied with the code. It slowly evolved from a bunch of code snippets into something I could somehow call a genetic algorithm. The problem was that my solution was tailored to this specific problem only! Since I lost all the Smalltalk code, I redid the whole thing from scratch: better design, simpler API, more flexible framework. I can currently solve a TSP problem, the best-Ruzzle-grid search and a diophantine equation.

I also plan to provide examples of the 8 queens problem, the knapsack problem, a quadratic equation problem, a resource-constrained problem and a simple bit-based example with the GA framework. Besides, there are now more selection operators, more crossover operators, more termination detectors (as well as support for sets of termination criteria!), cleaner code, and the list goes on! So I'll soon publish a GA framework for Pharo.
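To give a flavor of the ingredients listed above (selection operator, crossover operator, mutation, and a set of termination criteria), here is a minimal sketch in Python. This is my own toy code, not Freewill's API; it solves the small diophantine equation from Hermawanto's GA tutorial, a + 2b + 3c + 4d = 30:

```python
import random

def fitness(genes):
    a, b, c, d = genes
    return abs((a + 2*b + 3*c + 4*d) - 30)   # 0 means the equation is solved

def tournament_select(pop, k=3):
    # Selection operator: best of k randomly chosen individuals.
    return min(random.sample(pop, k), key=fitness)

def one_point_crossover(p1, p2):
    # Crossover operator: splice two parents at a random cut point.
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:]

def mutate(genes, rate=0.1):
    return [random.randint(0, 30) if random.random() < rate else g for g in genes]

def solve(pop_size=50, max_generations=500, seed=42):
    random.seed(seed)
    pop = [[random.randint(0, 30) for _ in range(4)] for _ in range(pop_size)]
    for gen in range(max_generations):
        best = min(pop, key=fitness)
        # Set of termination criteria: solution found OR generation budget spent.
        if fitness(best) == 0:
            return best, gen
        pop = [mutate(one_point_crossover(tournament_select(pop),
                                          tournament_select(pop)))
               for _ in range(pop_size)]
    return min(pop, key=fitness), max_generations

solution, generations = solve()
print(solution, fitness(solution))
```

Swapping in a different selection or termination strategy only means replacing one of the small functions, which is roughly the kind of flexibility a pluggable GA framework aims for.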

As most of you know, the Rush fan in me had to pick a project name in some way related to my favorite band!  So the framework will be called Freewill, for the lyrics in the song :

Each of us
A cell of awareness
Imperfect and incomplete
Genetic blends
With uncertain ends
On a fortune hunt that’s far too fleet


A stupid quest I'll address after the first version of my GA framework is published. It all started with a simple question related to the game of bingo (don't ask!): can we estimate the number of bingo cards sold at an event based on how many numbers it takes for each card configuration to have a winner? So it's just a matter of generating millions of draws and cards à la Monte Carlo and averaging how many numbers it takes for every configuration. Why am I doing that? Just because I'm curious!
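For the curious, the Monte Carlo idea can be sketched in a few lines. This is a toy blackout-bingo model of my own (75 balls, 24-number cards), not the author's code: simulate many games for a given number of cards in play, average the number of balls drawn before some card wins, and compare that average with the number observed at a real event.

```python
import random

def balls_until_first_winner(n_cards, rng):
    # Each card holds 24 distinct numbers out of 1..75 (free center ignored).
    cards = [set(rng.sample(range(1, 76), 24)) for _ in range(n_cards)]
    draw_order = rng.sample(range(1, 76), 75)      # full shuffled draw
    marked = [0] * n_cards
    for count, ball in enumerate(draw_order, start=1):
        for i, card in enumerate(cards):
            if ball in card:
                marked[i] += 1
                if marked[i] == 24:                # blackout: card is full
                    return count
    return 75

def average_balls(n_cards, trials=200, seed=7):
    rng = random.Random(seed)
    return sum(balls_until_first_winner(n_cards, rng)
               for _ in range(trials)) / trials

# More cards in play means the first winner appears after fewer draws,
# which is what lets us work backwards from draw counts to cards sold.
print(average_balls(1))
print(average_balls(100))
```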


There’s been a lot of action on the Pharo side with Glorp. I plan on having a serious look at the latest Glorp/Pharo combo and even participating in its development!


I’ll translate my articles (in French here, here and here) on the SQL sudoku solver in English and test the whole thing on the latest MySQL server.  Besides, db4free has upgraded to a new MySQL server version!


I had done a port of NeoCSV to Dolphin right before losing all my code data. It wasn't hard to port, so I'll redo it as soon as I reinstall Dolphin!


It’s time to reinstall VisualAge, VisualWorks, Squeak, ObjectStudio and Dolphin and see what’s new in each environment!  From what I saw, there’s a lot of new and interesting stuff on the web side.  Add to that the fact that most social media platforms have had significant changes in their respective APIs recently, so there’s a lot to learn there!


That’s a wrap folks!

Classé dans:Dolphin, MySQL, ObjectStudio, Pharo, Smalltalk, SQL, Squeak, sudoku, VisualAge, VisualWorks Tagged: 8 queens, bingo, Chess, crossover, CSV, database, diophantine equation, Dolphin, eight queens, framework, Freewill, GA, genetic algorithm, genetic blends, glorp, knapsack, mathematics, Monte Carlo, music, MySQL, NeoCSV, ObjectStudio, operators, ORM, Pharo, quadratic equation, resource-constrained, Rush, Ruzzle, sélection, Smalltalk, SQL, Squeak, Sudoku, termination, Traveling Salesman, TSP, VisualAge, VisualWorks

Smalltalk Jobs - Smalltalk Jobs - 7/18/16

  • St. Petersburg, FL: Software/Controls Engineer II at Plasma-Therm LLC
    • Required Skills:
      • B.S. in Computer Science or Computer Engineering
      • Five to ten years software engineering experience.
      • C# or Smalltalk programming capabilities.
      • Graphical User Interface design/development (ease of use, ergonomics, etc.)
      • Model, View, Controller (MVC) software architectures
      • Understanding of development of multi-threaded applications
      • Controls and automation systems development
      • Distributed computing environment application development
      • Client/server programming concepts
      • Database implementation and usage concepts
      • Software engineering (all phases of software lifecycle)
      • Agile development model/environment
      • Networked Application development
      • Object Oriented application design and development
      • Reading/writing software requirements/specifications
      • Embedded control for automation and robotics
      • Knowledge of SCADA Systems
      • Software Development for Semiconductor Processing Equipment and SECS standards
      • Supporting end users (both capturing feature requests, as well as fixing deficiencies)
      • Supporting software for manufactured equipment
    • Wanted Skills:
      • Controls/Automation experience
Good luck with your job hunting,
James T. Savidge


This blog’s RSS Feed

Filed under: Employment Tagged: jobs, Smalltalk, Smalltalk jobs

July 18, 2016

Göran Krampe - Is Spry a Smalltalk?

I love Smalltalk and I have been in love with it since approximately 1994. I have used VisualWorks, VisualAge (IBM Smalltalk), Dolphin Smalltalk, GemStone, Squeak and Pharo quite a lot, and I was very active in the Squeak community for a long period.

But in the last few years, finally, I have started to feel the "burn"... as in "Let's burn our disk packs!". And last year I started doing something about it - and the result is Spry. Spry is only at version 0.break-your-hd and several key parts are still missing, but it's getting interesting already.

Now... is Spry a Smalltalk? And what would that even mean?

I think the reason I am writing this article is because I am feeling a slight frustration that not more people in the Smalltalk community find Spry interesting. :)

And sure, who am I to think Spry is anything remotely interesting... but I would have loved more interest. It may of course change when Spry starts being useful... or perhaps the lack of interest is because it's not "a Smalltalk"?

Smalltalk family

The Smalltalk family of languages has a fair bit of variation, for example Self is clearly in this family, although it doesn't even have classes, but it maintains a similar "feel" and shares several Smalltalk "values". There have been a lot of Smalltalks over the years, even at PARC they made different variants before releasing Smalltalk-80.

So... if we look at Spry, can it be considered a member of the Smalltalk family?

There is an ANSI standard for Smalltalk - but not many people care about it, except perhaps some vendors. I should note, however, that Seaside has apparently (I think) brought renewed focus to the ANSI standard, since every Smalltalk implementation on earth wants to be able to run Seaside, and Seaside tries to rely on the ANSI standard (correct me if I am wrong).

Most Smalltalk implementations share a range of characteristics, and a lot of them also follow the ANSI standard, but they can still differ on pretty major points.

My personal take on things in Smalltalk that are pretty darn important and/or unique are:

  1. Everything is an object including meta levels
  2. A solid model for object oriented programming
  3. The image model
  4. 100% live system
  5. The browser based IDE with advanced cross referencing, workspaces and debuggers
  6. The keyword syntax and message cascades
  7. Message based execution model
  8. Dynamic typing and polymorphism
  9. Closures everywhere with lightweight syntax and non local return
  10. Very capable Collections and a good standard library

Not all Smalltalks cover all 10. For example, there are several Smalltalks without the image model and without a browser based IDE. Self and Slate and other prototypical derivatives don't have classes. Some Smalltalks have much less evolved class libraries for sure, and some are more shallow in the "turtle department".

In Spry we are deviating on a range of these points, but we are also definitely matching some of them!

How Spry stacks up

  1. Everything is an object including meta levels. No, in Spry everything is an AST node, not an object. A similar feel of uniformity exists, but it's different.
  2. A solid model for object oriented programming. Yes I think so, but Spry does not use the classic class model but is experimenting with a functional OO model.
  3. The image model. No, not yet. But the idea is to have it, not on a binary "memory snapshot" level, but in a more fine granular way.
  4. 100% live system. Yes, Spry is definitely 100% live and new code is created by running code etc.
  5. The browser based IDE with advanced cross referencing, workspaces and debuggers. No, but eventually I hope Spry gets something similar. First step is making a UI binding and evolving the meta programming mechanisms.
  6. The keyword syntax and message cascades. Yes, Spry has keyword syntax, but also has prefix syntax and currently no cascades nor statement separators.
  7. Message based execution model. No, Spry execution is not message based, but rather functional in nature. The practical difference should be slim to none I hope.
  8. Dynamic typing and polymorphism. Yes, Spry is dynamically typed and offers polymorphism, but through a different technique.
  9. Closures everywhere with lightweight syntax and non local return. Yes, closures with non local return, similar pervasiveness, and even more lightweight syntax than Smalltalk!
  10. Very capable Collections and a good standard library. No, not yet. But I intend to have it and will in many ways try to pick the best from Smalltalk and Nim.

So Spry scores 5/10. Not that shabby! And I am aiming for 3 more (#3, #5, #10) getting us up to 8/10. The two bullets that I can't really promise are #1 and #7, but I hope the alternative approach in Spry for these two bullets still reaches similar effects.

Let's look at #1, #2 and #6 in more detail. The other bullets can also be discussed, but ... not in this article :)

Everything is an object including meta levels

In Smalltalk everything is an object, there are no "fundamental datatypes". Every little thing is an instance of a class which makes the language clean and powerful. There are typically some things that the VM treats differently under the hood, like SmallInteger and BlockClosure etc, but the illusion is quite strong.

Spry on the other hand was born initially as a "Rebol incarnation" and evolved towards Smalltalk given my personal inclination. Rebol, like Spry, is homoiconic, and when I started building Spry it felt very natural to simply let the AST be the fundamental "data is code and code is data" representation. This led to the atomic building block in Spry being the AST node. So everything is an AST node (referred to simply as "node" from here on), but there are different kinds of nodes, especially for various fundamental datatypes like string, int and float, and they are explicitly implemented in the VM as "boxed" Nim types.

In Smalltalk, being objects implies that we can refer to them and pass them around; they have a life cycle and are garbage collected, they have an identity, and they are instantiated from classes which describe what messages can be sent to them.

In Spry the same things apply to nodes, except that they are not instantiated from classes. Instead, nodes are either created by the parser through explicit syntax in the parse phase, or they are created during evaluation by cloning already existing ones.

An interesting aspect of Spry's approach is that we can easily create new kinds of nodes as extensions to the Spry VM. And these nodes can fall back on types in the Nim language that the VM is implemented in. This means we can trivially reuse the math libraries, string libraries and so on already available in Nim! In essence, the Spry VM and the Spry language are much more integrated with each other, and since the VM is written in Nim, Nim and Spry live in symbiosis.

With Spry it should be perfectly normal and easy to extend and compile your own Spry VM, instead of having to use a downloaded binary VM or learning black magic in order to write a plugin for it, as it may feel in the Squeak/Pharo world.

Finally, just as with Smalltalk the meta level is represented and manipulated using the same abstractions as the language offers.

Conclusion? Spry is different but achieves something very similar in practice.

A solid model for object oriented programming

But what kind of behaviors are associated with a particular node, then? In Spry I am experimenting with a model where all nodes can be tagged, and these tags are the basis for polymorphism and dynamic function lookup. You can also avoid tagging and simply write regular functions and call them purely by name, making sure you feed them the right kind of nodes as arguments; then we have a pure functional model with no dynamic dispatch being performed.

In Spry we have specific node types for the fundamental datatypes int, float, string and a few other things. But for "normal" objects that have instance variables we "model objects as Maps". JavaScript is similar, it has two fundamental composition types - the "array" and the "object" which works like a Map. In Spry we also have these two basic structures but we call them Block and Map. This means we can model an object using a Map, we don't declare instance variables - we just add them dynamically by name to the map.

But just being a Map doesn't make an object - because it doesn't have any behaviors associated with it! In Smalltalk objects know their class which is the basis for behavior dispatch and in Spry I am experimenting with opening up that attribute for more direct manipulation, a concept I call tags:

  1. Any node can be tagged with one or more tags.
  2. Functions are also nodes and can thus also be tagged.
  3. A polyfunc is a composite function with sub functions.
  4. A polyfunc selects which sub function to evaluate based on comparing tags for the first argument, the "receiver" with the tags for the sub functions.

The net effect of this is that we end up with a very flexible model of dispatch. This style of overloading is a tad similar to structural pattern matching in Erlang/Elixir.

One can easily mimic a class by associating a bunch of functions with a specific tag. The tags on a node have an ordering, this means we also get the inheritance effect where we can inherit a bunch of functions (by adding a tag for them) and then override a subset using another tag - by putting that tag first in the tag collection of the node. Granted this is all experimental and we will see how it plays out. It does however have a few interesting advantages over class based models:

  1. Tags are dynamic and can be added/removed/reordered during the life cycle of an object.
  2. Tags have no intrinsic relations to each other, thus multiple inheritance in various ways works fine.
  3. Polyfuncs are composed dynamically which makes it easy to extend existing modules with new behaviors (like class extensions in Smalltalk).

I am just starting to explore how this works, so the jury is still out.
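The tag/polyfunc mechanism described above can be sketched in a few lines of Python. This is my own toy model, not Spry's implementation: a node carries an ordered tag list, and a polyfunc picks the sub-function whose tag comes first in the receiver's tag order.

```python
class Node:
    def __init__(self, value, tags):
        self.value = value
        self.tags = list(tags)      # ordered: earlier tags override later ones

class PolyFunc:
    def __init__(self):
        self.subfuncs = {}          # tag -> sub-function

    def on(self, tag, func):
        # Composed dynamically, a bit like class extensions in Smalltalk.
        self.subfuncs[tag] = func
        return self

    def __call__(self, receiver, *args):
        for tag in receiver.tags:   # first matching tag wins
            if tag in self.subfuncs:
                return self.subfuncs[tag](receiver, *args)
        raise LookupError('no sub-function matches tags %r' % receiver.tags)

describe = PolyFunc()
describe.on('animal', lambda n: 'some animal')
describe.on('dog',    lambda n: 'a dog named ' + n.value)

rex = Node('Rex', ['dog', 'animal'])    # 'dog' listed first, so it overrides
generic = Node('?', ['animal'])
print(describe(rex))
print(describe(generic))
```

Reordering a node's tags changes which sub-function wins, which is the "inheritance by tag ordering" effect described above.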

The keyword syntax and message cascades

Spry supports infix and prefix functions, and additionally keyword syntax via a simple parsing transformation. The following variants are available:


```
# Function call with zero arguments.
# Well, we are in fact referring to whatever is bound to the name "root"
# and evaluating it - and if it is indeed a func then it will be called.
# This happens to be a Spry primitive func that returns the Map holding the
# root bindings, essentially the same as "Smalltalk" in Smalltalk.
root

# Prefix function call with one argument.
echo "Hey"

# Prefix function call with two arguments. I am experimenting with different
# styles of conditionals in Spry, Smalltalk style is also doable.
if (3 < 4) [echo "yes"]

# Infix function call with one argument.
[1 2 3] size

# Infix function call with two arguments. In Spry this is currently not limited
# to a specific subset of characters like binary messages in Smalltalk.
3 + 4

# Infix function call with three arguments, keyword style.
# Parser rewrites this as "[1] at:put: 0 2" and since ":" is a valid character
# in a Spry name, it will simply run that func.
[1] at: 0 put: 2

# Infix function calls with 3 or more arguments do not need to use keyword
# style though, foo here could be an infix function taking 4 arguments.
# Not good style though.
1 foo 3 4 5

# Keyword style can be used for prefix functions too so that there is no
# receiver on the left! Looks funky for a Smalltalker and I am not yet
# certain it is a good idea.
loadFile: ""
```

This means Spry supports the classic Smalltalk message syntax (unary, binary, keyword) in addition to prefix syntax, which is sometimes quite natural, as for echo. Currently there is no syntactic support for cascades, but I am not ruling out introducing something like it down the road.
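The keyword rewrite mentioned above ("[1] at: 0 put: 2" becoming "[1] at:put: 0 2") can be sketched as a small token transformation. This Python version is my own toy illustration, not Spry's actual parser:

```python
def rewrite_keywords(tokens):
    # Fuse a run of keyword tokens like "at:" ... "put:" into one selector
    # "at:put:" and gather the argument tokens after it.
    out, i = [], 0
    while i < len(tokens):
        if tokens[i].endswith(':'):
            selector, args = '', []
            while i < len(tokens) and tokens[i].endswith(':'):
                selector += tokens[i]       # "at:" + "put:" -> "at:put:"
                args.append(tokens[i + 1])  # the argument follows each keyword
                i += 2
            out.append(selector)
            out.extend(args)
        else:
            out.append(tokens[i])
            i += 1
    return out

print(rewrite_keywords(['[1]', 'at:', '0', 'put:', '2']))
```

The same rule handles the receiver-less prefix case: a lone keyword such as `loadFile:` simply becomes a one-keyword selector followed by its argument.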


Spry is very different from Smalltalk and I wouldn't call it "a Smalltalk", but rather "Smalltalk-ish". I hope Spry can open up new exciting programming patterns and abilities we haven't seen yet in Smalltalk country.

Hope you like it!

Pharo Weekly - Remote Debugger now available

I am glad to release the first version of the RemoteDebuggingTools project. It allows you to explore and debug remote images.

Any feedback is welcome.
Best regards,

July 17, 2016

Torsten Bergmann - Magic with Pharo Reflectivity

Read this blog post from Denis Kudriashov on implementing an analogue of Dolphin Smalltalk's ##() syntax.