Planet Smalltalk

August 20, 2014

Torsten Bergmann - 2048 Competition

Torsten Bergmann - SqueakJS and Smalltalk 78

Bert is progressing with his SqueakJS project. A current version can be found here:

He can also run Smalltalk 78 on the Lively Kernel. The nice thing is that all the VM code is fully accessible - you can even inspect the virtual machine while it is running.

If you want to try it yourself, just open this page in your web browser:

Also really interesting are the details (for instance on garbage collection, and on how to run one high-level language on top of another high-level language).

Here are the videos from ESUG 2014 on that:

Torsten Bergmann - QCMagritte

QCMagritte is a framework on top of Seaside to develop applications. Here are the videos from ESUG 2014:

Torsten Bergmann - Fencing with Smalltalk

Anick Fron is talking about Fencing Software at ESUG 2014 (first written in Java, then rewritten in Smalltalk).

The webpage is

August 19, 2014

Cincom Smalltalk - The Cincom Smalltalk™ 2048 Brag and Swag Awards

Congratulations to the 2048 Winners!

Torsten Bergmann - Scratch activities in Japan

Torsten Bergmann - ESUG 2014 Videos

The first videos from ESUG 2014 appear on the net. Greetings to Cambridge.

Smalltalk Jobs - Smalltalk Jobs – 8/18/14

  • Alpharetta, GA (near Atlanta, GA) – VisualAge Smalltalk 6 Developer at OpenSpan
      Stephen Beckett, the Chief Scientist, (and the primary contact for this position,) at OpenSpan describes what they are doing in the following way:

      “…Our product injects software into target apps, figures out their object hierarchies, and presents a visual model to users in our IDE. Customers can then build automations between multiple apps on their desktop, such as: when a button is pressed in the Smalltalk app, read these data fields and automatically push them into a webpage (or whatever). In a call center environment where Agents do many identical repetitive tasks across a large number of applications, we can take minutes off their calls while dramatically improving accuracy.

      Our challenge is we don’t know Smalltalk at all, and while we have reverse engineered many elements of the Virtual Machine and relating window handles to internal objects and can create the hierarchy of objects, we have not figured out how to handle events. We can’t find any single point to hook, and we have not been able to inject a Smalltalk object that could subscribe to an event. (In Java and .Net, we use our injection to hook in Java/.Net controls that then interact with their respective platforms, which is far easier than using hooks or cracking windows messages.)

      So I’ve been using the following, but not necessarily with a lot of luck:”

    • SmallTalk internals (have not found anyone who has done anything with SmallTalk internals – doesn’t seem like a popular domain, compared to Java or .Net)
    • Experience with VM Api
    • Use of Primitive Feature to call code outside of the SmallTalk environment
    • Loading our own “IC” (Image Component) into a target application and having it communicate with the app using SmallTalk
      • Our injection gets us into Smalltalk right after NtDll is loaded, before anything else
      • When we called “LoadFileComponent” to try to load our SmallTalk component, it has failed every time.
    • Additional listings: Staffing Technologies, Pscs-us, Royak Group
Good luck with your job hunting,
James T. Savidge

View James T. Savidge's profile on LinkedIn

This blog’s RSS Feed

Filed under: Employment Tagged: jobs, Smalltalk, Smalltalk jobs

August 18, 2014

Noury Bouraqadi - Talking to Robots with Pharo

Slides of my presentation given at the ESUG 2014 conference are available online. It’s about robot software development using the Pharo dynamic language. It includes a quick overview of PhaROS, our bridge to ROS, as well as BoTest, our framework for TDD of robotics applications.

Esteban Lorenzano - Tide, the missing web framework

The slides of my talk about Tide at ESUG 2014 are now available at slideshare.

Pharo Weekly - Jasny bootstrap


The Bootstrap wrapping project for Seaside now also has support for the Jasny Bootstrap 
extension (see

Seaside examples can be found at the following URLs:

Hope this is useful.


August 17, 2014

Smalltalk Jobs - Smalltalk Jobs – 8/17/14

  • Mumbai, India – Kapital Financial Developer (Job ID 140078559) at J.P. Morgan
    • Required Skills:
      • MCA/BTech(CS or IT)
      • Should have at least 4-5 years of development experience in any object-oriented language
      • Should know Smalltalk and have worked in an IDE (VisualWorks/VisualAge/Dolphin, etc.) for at least 2 years
    • Wanted Skills:
      • Background of Investment banking
Good luck with your job hunting,
James T. Savidge

View James T. Savidge's profile on LinkedIn

This blog’s RSS Feed

Filed under: Employment Tagged: jobs, Smalltalk, Smalltalk jobs

Benoit St-Jean - Bee Smalltalk : la pré-version!

Bee Smalltalk, which I wrote about previously, is now available as a pre-release, and the source code is now accessible to everyone here.

Filed under: Bee Smalltalk, Smalltalk Tagged: Bee Smalltalk, source code, pre-release, Smalltalk

Essence# - New Release Of Essence#: Nīsān (Alpha Build 22)

The Nīsān release introduces full support for ANSI-Standard Dates and Times into Essence#. It also fixes some important bugs.

Nīsān is the name of the first month of the ecclesiastical Hebrew calendar (the name of the first month of the secular Hebrew calendar is Tishri). [We're still using a Biblical naming scheme because we're still in alpha. However, we're (hopefully) only 2-3 releases away from going to beta, which will happen once we achieve full compliance with the ANSI Standard.]

In addition to what’s required by the ANSI Standard with respect to times, dates and durations of time, convenience methods were added to class Number that enable the creation of Durations by sending messages such as #days, #hours, #minutes, #seconds, #milliseconds and #microseconds to numbers.
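A rough sketch of the kind of code this enables, per the convenience methods named above (printing via #printNl is standard Squeak/Pharo style; the exact printing protocol in Essence# may differ):

```smalltalk
"Building Durations by sending unit messages to numbers,
 as described above."
| d |
d := 2 days + 3 hours + 30 minutes.      "a Duration of 2d 3h 30m"
d printNl.
(1 seconds + 500 milliseconds) printNl.  "units compose into one Duration"
```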

As you may or may not be aware, I’m not only the author of Essence#, I’m also the author of the Chronos Date/Time Library. In spite of that, I’ve added very little time/date functionality in this release that wasn’t either required by the ANSI Standard or provided by the relevant classes and methods in the .Net Base Class Library.

About 80% of the users of a programming language just don’t need anything more in the way of time/date support beyond what is required by the ANSI Standard. And those who do only need it about 20% of the time. So it doesn’t make good sense to include anything like the Chronos Date/Time Library in the “standard library” for any programming language: It’s overkill for most people, most of the time.

But it does make sense to include the Chronos Date/Time Library as an extension library. But the time for that is not yet.

Download Nīsān (Alpha Build 22)

To download the latest release, navigate to the DOWNLOADS page on CodePlex. There’s a download link in the upper left corner of the DOWNLOADS page, labeled Essence#_Nisan_Setup.exe. Using it will get you a program that will install the Nīsān release of Essence# to any location you choose on your computer. Please see the documentation page on the CodePlex site for more information on how to use Essence#, such as the installation instructions and instructions on how to run scripts written in Essence#.

The Nīsān release includes changes and additions to the Essence# Standard Library–which will be installed by the installer program attached to this release, or which may be obtained separately from GitHub.

One of the utility scripts that aid in developing Essence# code was changed in this release, and a bug in one of the example scripts was fixed. Other than that, there were no changes to any of the scripts, and no new scripts were added. For more information on the scripts, please see the documentation.

August 16, 2014

Benoit St-Jean - GT Inspector

The Smalltalk community is often blamed for its lack of imagination and its very poor marketing.  With a programming language and a development environment this powerful, how is it that Smalltalk struggles to gain traction in industry?

What is missing, quite often, is a sexy demo!

Like the one for the GT Inspector.  Wow!

August 15, 2014

Bee SmaRT - Pre-releasing Bee Smalltalk

Today marks a key milestone in the Bee project: we are releasing Bee's main sources and libraries to the public! Do not get too excited yet. This is labeled a pre-release because many things are still missing and because the license is CC-BY-NC (for now). Also, there's no way to browse code as in a complete Smalltalk environment, nor to rebuild the kernel.

You can consider this just a small preview of what the system is going to look like internally. There are two kinds of files: SLLs are the binary Smalltalk libraries; SMLs contain the source code of their respective libraries. bee.sll is the main executable (it is just an .exe file; you can rename and double-click it). If no arguments are passed, a *REALLY* basic command line interface is shown, where you can enter some Smalltalk code:

$> bee
Welcome to bee.
command line interface is waiting for you
> 3+4+5
> 3 class
> | o | o := Object new. o size


If you pass a file, like this:

$> bee

bee will load the nativizer and compiler and try to compile and execute the file's sources. On Linux (with Wine) you can write a script like:

#!/usr/bin/wine bee.sll

Transcript show: 'hello bee'

You can also execute:

$> bee loadAndRunLibrary: <libname>

which will load the library (allowing you to execute code without loading the compiler and nativizer). I think the most interesting things to look at for now are the implementations of the primitives. You can go to your favourite Smalltalk and inspect

CompiledMethod allInstances select: [:cm | cm primitiveNumber > 0]

and then check in bee.sml how we implemented them (be sure to look at the bootstrapped version and not the Host VM one; we are writing both for now).

SLL files may or may not contain native code. In the case of the Kernel library (bee.sll), native code is of course present (because you need some machine code to execute anything). Also, the JIT contains native code (to generate native code you also need a code generator that is itself native).  Anything else doesn't require native code (but may contain it). For example, for now the compiler doesn't contain any native code. When it is loaded, the nativizer generates machine code just in time as methods are executed.

That's all for now, I'll be showing more details at ESUG, see you!

Cincom Smalltalk - Cincom Smalltalk Resolutions – July 2014

Click the title above to see July’s Resolutions for Cincom Smalltalk.

Pharo Weekly - New Woden Video


In a previous email I presented an early video of Woden, my new graphics engine written in Pharo. Here I have a new video in which I show some of the features of Woden:


With Alex, we started rewriting Roassal 3D using Woden, which can be seen in the second half of the video.

I will try to show many more demos in ESUG.

It also should be possible now to load Woden using the following script:

Gofer new
    smalltalkhubUser: 'ronsaldo' project: 'Woden';
    package: 'ConfigurationOfWoden';
    load.
(Smalltalk at: #ConfigurationOfWoden) loadBleedingEdge

Later I will make a stable configuration version.


Pharo Weekly - Live Robot Programming

Hi all,

it’s with great joy that I can announce the project that my PhD student Miguel and I have been working on recently: Live Robot Programming, or LRP for short.

LRP is a live programming language designed for the creation of the behavior layer of robots. It is fundamentally a nested state machine language built with robotics applications in mind, but it is not bound to a specific robot middleware, API or OS. Have a look at one minute of LRP programming to get an idea of what it is like:

Live programming is fun, and live robot programming even more so, as it brings all the advantages of live programming to programming a robot. You get direct manipulation of a running robot, and that’s just cool beyond words. As an example of LRP on a robot, this guy was programmed in LRP: Note that you can use LRP ‘just’ for live programming nested state machines as well.

More information on LRP is available on its website: where you can also find download instructions. 

LRP is implemented in Pharo, and uses Roassal2 for the visualization of its state machines. We can currently steer the Lego Mindstorms EV3 and ROS robots, thanks to a small layer on top of the cool Pharo support that Jannik, Luc, Santiago and Noury are implementing at Douai. I am going to look into support for the Parrot AR.Drone 2.0 in a few weeks. 

Miguel will be at ESUG next week (I cannot make it), and has a talk at the IWST workshop about LRP, in the morning session. I am sure that he will also be happy to give demos of LRP if you ask him to (but sadly without a robot).

All feedback is welcome, and … have fun!

Johan Fabry   –

August 14, 2014

Ricardo Moran - Controlling robots easily with Leap Motion

Hi, people!

I am writing to tell you that we have successfully added the Leap Motion controller to Physical Etoys. We are working to add more of the features that the Leap has (especially the ones that come with the SDK 2.0).

While working, we decided to film some videos. Hope you like them :D

Have fun,






Noury Bouraqadi - explore package for ROS Indigo

A lot of changes have been made between ROS Groovy and Indigo. For now, there is no official version of the explore package supporting ROS Indigo. So here you can find a ROS Indigo compatible rosbuild explore package. This package is modified from its original version:

Pharo Weekly - HelloPharo: Smooth deployment of Pharo Web Apps


At Ta Mère, we are used to deploying Ruby/Rails applications with Heroku or on a VPS with Capistrano. Almost everybody uses the same tools and techniques in the Rails community, so deployment is quite easy once you grasp the process.

The same process was quite frustrating with Pharo. To solve that, we’ve built HelloPharo. It is a tool to deploy small apps to a Linux VPS/VM.

It is heavily inspired by Capistrano: it favors convention over configuration and it aims to be full stack (e.g., serving the assets, restarting the processes). It is built with Ansible.
We haven’t released a fixed version yet but the tool starts to be in a good-enough shape to be shown. We want to grab some feedback and fix the most obvious limitations (see the README for more) before releasing version 0.1.0.
If you or your company uses a well-defined process to deploy Pharo web apps, we are all ears. We think that having a canonical way to deploy simple apps is a must if we want to see wider Pharo adoption among small web companies. This process *must* be Unix friendly if we want to attract Python or Ruby people. Most of them are DevOps anyway; the command line is their friend, NOT something they want to avoid.
Pull requests (for code or instructions in the README) are more than welcome. The code and the documentation are MIT licensed.


Torsten Bergmann - Voronoi scripted using Pharo

This video is about RTVoronyjBuilder for Roassal. More info here.


Yoshiki Ohshima - [Work] Recent Reading

“A 15 Year Perspective on Automatic Programming” by Robert Balzer. Robert (Bob) Balzer put forward many pioneering and interesting ideas, such as the “history-playback debugger” and “Dataless Programming” ...

August 13, 2014

Pharo Weekly - [ANN] TinyMCE for Seaside


If you want to use the TinyMCE editor in your web application
with Seaside, then you should/could use this one:

Project + Documentation!/~TorstenBergmann/TinyMCE

Simple Demo:

Nothing special, just a simple file library and an
example of how to use it. But maybe it saves others some time.


Essence# - Multiple Object Spaces In Essence#

What is an Object Space?

An object space is an object that encapsulates the execution context of an Essence# program. It is also responsible for initializing and hosting the Essence# run time system, including the dynamic binding subsystem that animates/reifies the meta-object protocol of Microsoft’s Dynamic Language Runtime (DLR.)

Any number of different object spaces may be active at the same time. Each one creates and encapsulates its own, independent execution context. The compiler and the library loader operate on and in a specific object space. Blocks and methods execute in the context of a specific object space. Essence# classes, traits and namespaces are bound to a specific object space. Even when a class, trait or namespace is defined in the same class library and the same containing namespace, they are independent and separate from any that might have the same qualified names that are bound to a different object space.

In spite of that, it is quite possible for an object bound to one object space to send messages to an object bound to a different object space. One way to do that would be to use the DLR’s hosting protocol. That’s because an Essence# object space is the Essence#-specific object that actually implements the bulk of the behavior required by a DLR language context, which is an architectural object of the DLR’s hosting protocol.

The C# class EssenceSharp.ClientServices.ESLanguageContext subclasses the DLR class Microsoft.Scripting.Runtime.LanguageContext, and thereby is enabled to interoperate with the DLR’s hosting protocol. But an instance of EssenceSharp.ClientServices.ESLanguageContext’s only real job is to serve as a facade over instances of the C# class EssenceSharp.Runtime.ESObjectSpace. And EssenceSharp.Runtime.ESObjectSpace is the class that reifies an Essence# object space.

So, if you are only interested in using Essence#, and have no interest in using other dynamic languages hosted on the DLR, there is no need to use a DLR language context in order to invoke the Essence# compiler and run time system from your own C#, F# or Visual Basic code. You can use instances of EssenceSharp.Runtime.ESObjectSpace directly. The only disadvantage of that would be that using other DLR-hosted languages would then require a completely different API (e.g, using an IronPython library from Essence# code requires using the DLR hosting protocol, and hence requires using a DLR language context).

The advantages of using instances of EssenceSharp.Runtime.ESObjectSpace directly would be a much richer API that is far more specific to Essence#.

You can get the object space for the current execution environment by sending the message #objectSpace to any Essence# class (even to those that represent CLR types.) And the Essence# Standard Library includes a definition for an Essence# class that represents the Essence#-specific behavior of instances of the C# class EssenceSharp.Runtime.ESObjectSpace. It’s in the namespace CLR.EssenceSharp.Runtime, and so can be found at %EssenceSharpPath%\Source\Libraries\Standard.lib\CLR\EssenceSharp\Runtime\ObjectSpace.
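A minimal sketch of the lookup described above (only #objectSpace is stated by the text; the identity check is an illustrative assumption about classes living in the same default space):

```smalltalk
"Any class answers the object space hosting it."
| space |
space := Object objectSpace.
space printNl.
"Classes bound to the same object space share one execution
 context, so in a single default space this should hold:"
(Object objectSpace == String objectSpace) printNl.
```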

There are many ways that Essence# object spaces might be useful. One example would be to use one object space to host programming tools such as browsers, inspectors and debuggers, but to have the applications on which those tools operate live in their own object spaces. That architecture would isolate the programming tools from any misbehavior of the applications on which they operate, and vice versa.

To get additional insight into the concept of object spaces and how they might be used to good effect, the paper Virtual Smalltalk Images: Model and Applications is highly recommended.

August 12, 2014

Noury Bouraqadi - Getting laser and odometry information from Robulab robot

In this tutorial you will be able to get the laser and odometry data from a Robulab robot published into ROS topics. Setup: you must ensure you meet all the requirements listed in the Setup section of the Testing Robulab post. Install: create a fresh PhaROS image by executing in a terminal: $ pharos create newimage… Continue reading

Noury Bouraqadi - Testing Robulab

In this tutorial we will make basic tests to assert that both the robulab robot and the laptop are configured correctly. We will consider them well configured if we can start a PhaROS node that handles the robulab robot, so that we can publish motion messages through the rostopic pub command and make it move. Setup Robulab… Continue reading

Clément Béra - Arithmetic, inlined and special selectors

Today we are going to discuss how message sends are dispatched in the VM and/or compiled in the image for arithmetics, inlined and special selectors.

The “myth”

In Pharo, everything is an object (even integers, compiled methods or stack activations) and objects can communicate with each other only by sending messages.

Back to reality

A programming language with only message sends as the way to communicate between objects has performance issues. These problems can be solved, as in Self, by an efficient just-in-time (JIT) compiler that adaptively recompiles frequently used portions of code into faster-running versions. This approach, however, has two main problems:

  • an efficient JIT compiler requires a lot of engineering work to implement and maintain for all the common hardware and operating systems.
  • portions of code that are not frequently used remain slow, which can be a problem for certain applications, such as large interactive graphical applications.

In our case, for the Cog VM, a baseline JIT compiler is available and we are now working on improving the JIT's capabilities to speed up code execution. However, the JIT speeds up code execution only on processors it supports (right now x86; within a year or so it will support 32-bit ARM and x86_64). For other hardware with other processors (such as supercomputers with MIPS processors), or on platforms forbidding JIT compilation (such as the iPhone), Pharo and Squeak rely on the stack interpreter's speed.

However, performance should be decent in all cases, including:

  • large interactive graphical applications
  • programs running on processors unsupported by the JIT, or on an OS that forbids JIT compilers

In the first case, the virtual machine usually relies on the interpreter's speed for performance. In the latter case, Pharo and Squeak rely on the interpreter-based version of the VM (aka the Stack VM), which is portable and JIT-free, and therefore also relies on the interpreter's speed for performance. Because of these cases, the interpreter's performance is critical.

A few months ago, someone asked me why compiled methods have a specific format in Pharo. One reason is compactness, but I believe the main reason is that the interpreter performs better with our compiled method format, where everything (literals, bytecodes, metadata) is encoded in the same object, instead of fetching multiple objects to run one method. In the case of a JIT compiler, the compiled method is fetched once to be compiled to machine code, so the literals and the bytecodes could live in other objects in memory; those memory accesses would most probably not slow down the VM, as they are rarely performed. The machine code version of the method, with all its information at a single memory location, is what is accessed in most cases.

Static optimizations

To improve the interpreter's performance, the bytecode compiler and the Cog VM implement some static tricks. Most of these tricks (the ones we are talking about in this post) consist in handling some message sends specially, based on a specific list of selectors. We'll discuss in this blog post what those selectors are, what the tricks consist of for each of them, and how they do or do not limit the language's features.

There are three main kinds of specific selectors:

  • arithmetic selectors: #(#+ #- #< #> #<= #>= #= #~= #* #/ #\\ #@ #bitShift: #// #bitAnd: #bitOr: )
  • special selectors: #(#at: #at:put: #size #next #nextPut: #atEnd #== #class #blockCopy: #value #value: #do: #new #new: #x #y)
  • inlined selectors:#(#and: #or: #caseOf: #caseOf:otherwise: #ifFalse: #ifFalse:ifTrue: #ifTrue: #ifTrue:ifFalse: #ifNil: #ifNil:ifNotNil: #ifNotNil: #ifNotNil:ifNil: #to:by:do: #to:do: #whileFalse #whileFalse: #whileTrue #whileTrue:)

Arithmetic selectors
#(#+ #- #< #> #<= #>= #= #~= #* #/ #\\ #@ #bitShift: #// #bitAnd: #bitOr: )

The arithmetic selectors are messages leading to arithmetic operations if they are sent to integers or floats.

In the StackInterpreter, they are all optimized for SmallIntegers (integers in this range: [-1073741824 ; 1073741823]), and a subset of them (#(#+ #- #< #> #<= #>= #= #~= #* #/ #@ )) are also optimized for floats.

In the case where a message with an arithmetic selector is sent, and the receiver and the argument are both SmallIntegers, both Floats, or a SmallInteger and a Float, the virtual machine will not perform any lookup and will run the primitive operation directly. If the primitive operation fails, or if the receiver and the argument do not match the requirements, a regular message send is performed.
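The SmallInteger fast path can be observed indirectly from the image; a sketch on a 32-bit Pharo/Squeak image, using the boundary values of the range quoted above (printing with #logCr as elsewhere in this post):

```smalltalk
"Both operands are SmallIntegers: the VM runs the #+ primitive
 directly, without any method lookup."
(3 + 4) class name logCr.              "SmallInteger"
"At the top of the range the primitive fails on overflow and a
 regular send handles the promotion to a LargePositiveInteger."
1073741823 class name logCr.           "SmallInteger (32-bit image)"
(1073741823 + 1) class name logCr.     "LargePositiveInteger"
```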

In the JIT, there are 3 cases:

  • #(#+ #- #bitAnd: #bitOr:): if the JIT can infer the type of at least one of the 2 operands (the operand is a constant and a SmallInteger), then a fast path for the arithmetic operation is inlined in the machine code instead of a send. If one of the 2 operands is not necessarily a SmallInteger, fallback code that sends the message is also generated, and the fastest valid path is taken at runtime depending on the types of the operands.
  • #(#< #> #<= #>= #= #~=): for #= and #~=, if the JIT can infer that one of the operands is a constant and a SmallInteger (and in any case for the 4 other selectors), and if the next instruction after the comparison is a conditional jump, the JIT generates 2 paths for the jump: a fast path using CPU instructions such as jumpBelow:, jumpGreaterOrEqual: and co., used at runtime if the 2 operands are SmallIntegers, and a regular path that sends the message and compares the resulting value to the objects true and false if at least one of the 2 operands is not a SmallInteger.
  • the others: they're not optimized by the JIT at the send site (a common message send that activates the primitive).

The Pharo/Squeak user cannot change the default behavior of the optimized instructions. One cannot, for example, remove the primitive pragma from SmallInteger>>#+. This constraint is not very important, as it is uncommon to want to change the execution of integer and floating-point arithmetic.

Special selectors
#(#at: #at:put: #size #next #nextPut: #atEnd #== #class #blockCopy: #value #value: #do: #new #new: #x #y)

Firstly, Pharo no longer uses #class and #blockCopy: as special selectors. Squeak still uses #class but not #blockCopy:. I will not discuss these two.

In the JIT, only #== is optimized specifically. This operation checks the identity of two objects and is performed without any lookup. In addition, the JIT generates faster conditional jump machine code if the result of this operation is used for a conditional jump (in a similar fashion to #(#< #> #<= #>= #= #~=), but no fallback code is needed). All the other special selectors are compiled normally to machine code (message sends).
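The difference between the specially handled #== and an ordinary, overridable send like #= can be seen directly in any image:

```smalltalk
"#== compares object addresses without any lookup;
 #= is a normal message send that classes can override."
| a b |
a := 'abc' copy.
b := 'abc' copy.
(a = b) logCr.    "true: value equality"
(a == b) logCr.   "false: two distinct objects"
(a == a) logCr.   "true: same address"
```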

#(at: at:put:)
These two operations are handled specially in the StackInterpreter, using a dedicated cache to improve performance. Basically, if the method lookup for #at: or #at:put: on an object ends up at a method with the primitive for #at: or #at:put:, and this object is not a context, then the receiver, its variable size, its number of fixed fields and its format are stored in the cache. Subsequent executions of #at: and #at:put: will use the cached values instead of computing the needed data from the object header.

The cache currently has 8 entries (but Eliot will blame me if I do not point out that the number of entries is a setting that can be changed very easily).


The values in the cache are indexed by hash, the hash being 3 bits of the receiver's address (the bits just above the last 2). If an entry is already occupied by another object, the new receiver replaces the previous one.
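A sketch of the hash arithmetic just described (illustrative only, not the actual VM source; the address value is made up):

```smalltalk
"Take 3 bits of the address, skipping the lowest 2 bits,
 to get an index in [0 ; 7] for the 8-entry cache."
| address index |
address := 16r12345678.
index := (address >> 2) bitAnd: 2r111.
index logCr.
```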

I won’t go into details, but this cache is also correctly flushed when needed, so if I create a subclass of Array and run this kind of code:
a := MyArray new: 1.
a at: 1 put: #myValue.
(a at: 1) logCr.
MyArray compile: 'at: index ^ #newResult' classified: 'access'.
(a at: 1) logCr.
Smalltalk garbageCollect.
(a at: 1) logCr.

The results on Transcript are always correct.

For Array and ByteString, the #size operation is performed without lookup and directly returns the object's variable size. The message is sent normally to other objects.

#(#next #nextPut: #atEnd #do: #new #new:)
These selectors are special only to reduce methods' memory footprint (by encoding them in the bytecode instead of the literal frame of a method). They are executed as regular message sends.

In addition to being the only special selector optimized by the JIT, #== is optimized by the StackInterpreter. It always answers a boolean: true if the 2 objects have the same address, else false. This operation is performed without any lookup.

#value #value:
If the receiver of a message with one of these selectors is a closure, the VM directly activates the closure without any lookup. Otherwise, a regular message send is performed.

#x #y
If the receiver is a point, the VM directly answers the corresponding instance variable of the point without any lookup. Otherwise, a regular message send is performed.
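A minimal illustration using the standard Point protocol:

```smalltalk
"When the receiver is a Point, the VM answers the instance
 variable directly; no lookup for #x or #y takes place."
| p |
p := 3 @ 4.
p x logCr.   "3"
p y logCr.   "4"
"On a non-Point receiver, #x falls back to a regular send
 (failing with doesNotUnderstand: unless implemented)."
```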


Most of these optimizations are restricted to given classes and/or primitive methods. So the restrictions due to these optimizations are low: you cannot remove the primitive pragma from the optimized methods and replace it with something else, or the virtual machine will not be aware of the change.

In the case of #at: and #at:put:, there is no restriction at all (a lookup is performed each time, to check that a method with a primitive for #at: or #at:put: is found).

The only exception is #==. This selector is implemented in ProtoObject, and the corresponding primitive is performed on all objects without any lookup. Therefore, no object can override #== in the system, and the implementation in ProtoObject cannot be changed.

Inlined selectors
#(#and: #or: #caseOf: #caseOf:otherwise: #ifFalse: #ifFalse:ifTrue: #ifTrue: #ifTrue:ifFalse: #ifNil: #ifNil:ifNotNil: #ifNotNil: #ifNotNil:ifNil: #to:by:do: #to:do: #whileFalse #whileFalse: #whileTrue #whileTrue:)

All the inlined messages are control-flow oriented messages. In other languages, such as JavaScript, some keywords are reserved for loops and conditions (if, for, foreach, …). No keywords are reserved in Smalltalk. The problem with this approach, using messages instead of dedicated keywords, is that message sends are slower for the interpreter. Therefore, it was decided that control-flow messages would be inlined statically into jumps in order to improve overall performance (by a factor of 2.5x-10x).

The generic idea, in pseudo code, is that when you write:

MyClass>>#foo: argument
| temp |
argument ifTrue: [temp := 1] ifFalse: [temp := 0].
^ temp

The code is compiled to:

MyClass>>#foo: argument
label 0:
argument jumpFalse: label 1.
temp := 1.
jumpTo: label 2.

label 1:
temp := 0.

label: 2
^ temp

And something similar is compiled for loops, with a conditional jump marking whether the loop continues or exits, and an unconditional backward jump.

These inlined selectors have different constraints.

The inlined loop selectors have very low constraints. Basically, one cannot change the implementation of (SmallInteger>>#to:by:do: SmallInteger>>#to:do: BlockClosure>>#whileFalse BlockClosure>>#whileFalse: BlockClosure>>#whileTrue BlockClosure>>#whileTrue:). One usually does not really care, because it is very rare to want to override these selectors.

The inlined condition selectors (#and: #or: #caseOf: #caseOf:otherwise: #ifFalse: #ifFalse:ifTrue: #ifTrue: #ifTrue:ifFalse: #ifNil: #ifNil:ifNotNil: #ifNotNil: #ifNotNil:ifNil:) constrain the system more. One cannot change the implementation of any of those selectors, nor override any of them in any class of the system. This is a very annoying constraint when you are building a DSL where you want to use condition selectors, or for Boolean proxies.

Some control-flow messages are not inlined and are currently missing in Pharo. These two messages are SmallInteger>>#timesRepeat: and BlockClosure>>#repeat. Right now, in the kernel, when one wants to use a loop to optimize some code, one uses #to:do: or #to:by:do:. However, these 2 selectors require a 1-argument block. Therefore, if we inlined #timesRepeat:, which requires a 0-argument block, we would remove the overhead of pushing the block argument at each iteration, which is noticeable in certain micro benchmarks. In addition, compiling #to:do:, #to:by:do: and #timesRepeat: requires compiling a conditional jump at the beginning, to check whether the loop has reached its maximum number of iterations. Compiling and inlining #repeat would allow compiling a loop without a conditional jump at the beginning, and again would be faster in certain micro benchmarks.
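The trade-off can be sketched with two equivalent loops (standard Pharo selectors):

```smalltalk
"#to:do: is inlined to jumps, but its 1-argument block means
 the index is pushed at each iteration; #timesRepeat: takes a
 0-argument block but is currently a plain, non-inlined send."
| count |
count := 0.
1 to: 5 do: [:i | count := count + 1].
5 timesRepeat: [count := count + 1].
count logCr.   "10"
```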

Inlining these two messages adds very low constraints and may be interesting performance-wise. I’m looking forward to someone wanting to do that. I made a first attempt once, but it broke the Xtreams library (this library uses timesRepeat: on non-integer objects).

How to avoid constraints due to these three kinds of selectors

If one needs to avoid the constraints detailed above to experiment with something exotic, there is a simple solution. All these selectors are compiled specially by the bytecode compiler to tell the VM how to run them. One can simply remove the compiler's special handling of these selectors.

For arithmetic and special selectors, in OpalCompiler, the solution consists in editing the special selectors array, replacing the problematic selectors by nil (IRBytecodeGenerator>>#specialSelectorsArray), then reinitializing the bytecode generator (IRBytecodeGenerator initialize), and lastly recompiling the whole image (OpalCompiler recompileAll). The arithmetic and/or special selectors you have removed will then be compiled like any other selector and executed as such by the virtual machine.
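As one script, the steps just described look roughly like this (step 1 is a manual edit of the method source; the class and method names are the ones quoted in the text):

```smalltalk
"1. Edit IRBytecodeGenerator>>#specialSelectorsArray by hand,
    replacing the selectors you want un-optimized with nil."
"2. Rebuild the bytecode generator's tables:"
IRBytecodeGenerator initialize.
"3. Recompile the whole image so the change takes effect:"
OpalCompiler recompileAll.
```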

For inlined selectors, this is trickier. One cannot remove them globally from the compiler, as some kernel code relies on the inlined selectors, making the image crash if you remove these optimizations.

However, one can remove the optimization in a given scope through a compilation option. There are 2 ways of setting a compilation option:

  • per method, with the pragma <compilerOptions: #(- optionInlineIf)>
  • per hierarchy of classes, by overriding class side the compiler method:
    MyClass class>>#compiler
    ^ super compiler options: #(- optionInlineIf)

The class-hierarchy way is more convenient; however, it has a major drawback: it is not compatible with Monticello/Metacello, meaning that the methods will be loaded but miscompiled. Therefore, if you are using this compiler hack, you need to recompile the whole image after loading your code.

Here are the available options:
+ optionInlineIf
+ optionInlineIfNil
+ optionInlineAndOr
+ optionInlineWhile
+ optionInlineToDo
+ optionInlineCase


It is difficult to combine high performance and high flexibility. With a lot of engineering work it is possible, as the Self VM and the Self language have proven. The Self language, however, still lacks a fast interpreter for infrequently used code, which can lead to slow large interactive graphical applications.

In our case, the Cog VM chose to limit the capabilities of the system as little as possible while reaching high performance. However, a few constraints remain. Most of these constraints are easy to manage though, as explained in this post.