Planet Smalltalk

July 19, 2017

Hernán Morales Durand - Iliad version 0.9.6 released

Lately I have been playing with the Iliad web framework, and decided to publish some updates which I want to share with you:

  • A new web site based on GitHub Pages, with install instructions, screenshots, and links to pre-loaded images and documentation
  • Updated Iliad to load in Pharo 6.0
  • Added an Iliad Control Panel, based on the Seaside one, which allows you to create/inspect/remove web server adapters

July 18, 2017

Pharo Weekly - Keccak-256 hashing algorithm

Hi there!

I am just releasing the first version of the Keccak-256 hashing algorithm.
This version is based on a JavaScript implementation.
This implementation supports byte arrays and ASCII and UTF-8 strings as messages.
Soon I will be adding support for the rest of the Keccak family of hashing functions; since the implementation is quite configurable, it just needs some constructors with specific configurations and tests for these other use cases.
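Such configuration-specific constructors might, as a sketch, look like the following (the class name, selectors, and parameter names are hypothetical, not the released API; only the standard Keccak rate/capacity values are factual):

```smalltalk
"Hypothetical sketch — these selectors are illustrative, not the
released API. The Keccak family members differ only in their sponge
parameters, so each constructor just fixes rate and capacity (in bits)."
Keccak class >> keccak256
	"rate 1088, capacity 512 -> 256-bit digest"
	^ self new rate: 1088 capacity: 512; yourself

Keccak class >> keccak512
	"rate 576, capacity 1024 -> 512-bit digest"
	^ self new rate: 576 capacity: 1024; yourself
```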
Here is a one-liner for building an image with version v0.1:

Torsten Bergmann - RedditSt20

Pierce extended Sven's excellent " in 10 elegant classes" with even more. Read more.

July 16, 2017

Pierce Ng - RedditSt20

I have started a booklet on Pharo, hopefully the first of, um, more than one. It is entitled RedditSt20, on my fork and extension of Sven Van Caekenberghe's excellent " in 10 elegant classes", to cover the following in another 10 or so classes:

  • GlorpSQLite
  • Seaside-MDL
  • username/password authentication
  • logging
  • 2-factor authentication

The book is hosted on GitHub. Source code is on SmalltalkHub.

The book is being written using Pillar, of course. Note that the Pharo 5 version of Pillar that I downloaded from InriaCI doesn't work - the supporting makefiles aren't able to obtain the output of "./pillar introspect <something>". Use the Pharo 6 version.

July 15, 2017

Torsten Bergmann - PharoLambda

PharoLambda is a simple example and GitLab build script for deploying a minimal Pharo Smalltalk image/vm to AWS Lambda.

Smalltalk Jobs - Smalltalk Jobs – 7/14/17

  • Miami, FL: GemStone Lead through E-solutions Inc
    • Required Skills:
      • 7 years hands-on experience in GemStone database administration
      • Good in application support, technical solutions, implementation of business requirements and enhancements
      • Good development experience with VisualWorks 7.9 Smalltalk development
      • Should have excellent knowledge of OOP concepts
      • Should be able to work independently in Smalltalk technology
      • Good working experience in a multi-vendor environment and a client-facing role
  • Wilmington, MA: Software Engineer II at Rudolph Technologies, Inc.
    • Required Skills:
      • Bachelor’s or Master’s Degree in Software Engineering, Electrical Engineering, or comparable field
      • 2 – 5 years of experience
      • Object Oriented Programming skills
      • An interest in electronics, servo systems, optics and/or image processing
      • Knowledge of embedded system development environments such as RTEMS, VxWorks or similar environment
      • Outstanding problem solving skills
      • Serviceable written and verbal communication skills
      • A strong desire for technical challenge
    • Wanted Skills:
      • Smalltalk
      • C++
      • Python
      • Control Systems (servo, stepper, robotics) course work and preferably lab experience. Academic setting is acceptable.
      • A knowledge of optics, sensor technologies or physics.
      • Image processing using a popular image processing toolkit such as Halcon, MIL, Cognex, IPP, or OpenCV
      • A working knowledge of modern SW engineering process methodologies such as SDLC, Agile, etc.
      • Knowledge of SQL for a popular DB like PostgreSQL, Oracle, or SQL Server
Good luck with your job hunting,
James T. Savidge

View James T. Savidge's profile on LinkedIn

This blog’s RSS Feed

Filed under: Employment Tagged: jobs, Smalltalk, Smalltalk jobs

July 14, 2017

Torsten Bergmann - Teapot: Web Programming Made Easy

Nice article on how to write a web application with Pharo's Teapot framework.

Torsten Bergmann - Iceberg 0.5

A new release of Iceberg for Pharo is available to work with Git.

Torsten Bergmann - CORA

An add-in for Pharo's Quality Assistant. Read more

Torsten Bergmann - Quuve

Debris Publishing has a new version of Quuve, an investment management platform written in Pharo and Seaside. It is another success story and another example of "things people built with Smalltalk". They use my Twitter Bootstrap for Seaside project. Reminds me that I want to update the project when my spare time permits. Full video demo is here.

July 11, 2017

Tom Koschate - The Magic is Back

In the interest of getting on with life, I succumbed and created a magic build image (actually there are four, but they’re the same).  So the build is now happening with Jenkins and I’ve moved to other matters for now.

July 08, 2017

Pharo Weekly - [ANN] Iceberg 0.5 released

Hi all,
I’m releasing version 0.5 of Iceberg.
This is the changelog:
Major changes:
– works on 64bits
– adds cherry-pick
This version also includes a list of fixes, most important one is this:
– branches are kept in line with the local working copy (so if you change a branch on the command line or in another image, it will be indicated correctly)
But there are many others, next version will have a full list, I promise 🙂
Now, to actually use it you will need to accomplish several steps (until I update the image):
1) You need to download the new stable VM for P7 (it does not matter if you are on P6).
wget -O- | bash
wget -O- | bash # if you are on Linux
then, to update, execute this (sorry, it is like that because we still have an older Metacello version):
#('Iceberg-UI' 'Iceberg-Plugin' 'Iceberg-Metacello-Integration' 'Iceberg-Libgit' 'Iceberg' 'BaselineOfIceberg' 'LibGit-Core' 'BaselineOfLibGit')
	do: [ :each | each asPackage removeFromSystem ].
Metacello new
	baseline: 'Iceberg';
	repository: 'github://pharo-vcs/iceberg';
	load.
There will be a 6.1 version that provides Iceberg 0.5, but it requires different versions of the C plugins and hence a different VM.

July 07, 2017

Pharo Weekly - News from PR battle front

I prepared a script that should help you with the reviews of the pull requests on Pharo 7. We will later convert it into a fancier tool. It does the following steps:
– sets the basic information: pull request number, path to your Pharo repository clone, name of your fork
– registers the repository in Iceberg and sets the pull and push target remotes
– switches the branch to the particular commit from which the Pharo image was bootstrapped
– registers the repository in Monticello packages to be able to do correct diffs
– gets basic information about the pull request from GitHub (original repository, branch name)
– registers the PR's original repository in the remotes if needed and fetches information from it
– creates a new local branch to merge the PR
– merges the PR branch
– displays a simple tool that shows the differences done in this merged branch
pullRequest := 73.
target := '/path/pharo' asFileReference.
myForkName := 'myFork'.
repository := IceRepositoryCreator new location: target; subdirectory: 'src'; createRepository.
repository register.
fork := repository remotes detect: [ :remote | remote remoteName = myForkName ].
repository pushRemote: fork.
repository pullRemote: repository origin.
repository checkoutBranch: (SystemVersion current commitHash).
fileTreeRepository := (MCFileTreeRepository new directory: target / #src; yourself).
repositoryGroup := MCRepositoryGroup withRepositories: { fileTreeRepository. MCCacheRepository uniqueInstance }.
MCWorkingCopy allManagers
	select: [ :wc | (wc repositoryGroup repositories reject: [ :repo | repo isCache ]) isEmpty ]
	thenDo: [ :wc | wc repositoryGroup: repositoryGroup ].
stonString := (ZnEasy get: '', pullRequest asString) contents.
head := (STONJSON fromString: stonString) at: 'head'.
sshUrl := (head at: #repo) at: 'ssh_url'.
branchName := head at: #ref.
user := (sshUrl withoutPrefix: '') withoutSuffix: '/pharo.git'.
fork := repository remotes
	detect: [ :remote | remote remoteName = user ]
	ifNone: [
		| newFork |
		newFork := IceRemote name: user url: ('{1}/pharo.git' format: { user }).
		repository addRemote: newFork.
		newFork ].
repository fetchFrom: fork.
prMergedBranchName := 'pr', pullRequest asString.
repository createBranch: prMergedBranchName.
repository checkoutBranch: prMergedBranchName.
commit := repository revparse: user, '/', branchName.
bootstrapCommit := repository revparse: (SystemVersion current commitHash).
[ repository backend merge: commit id ]
	on: IceMergeAborted
	do: [ :error | repository mergeConflictsWith: commit ].
headCommit := repository revparse: 'HEAD'.
browser := GLMTabulator new.
browser row: [ :row | row column: #commits span: 2; column: #changes span: 3 ]; row: #diff.
browser transmit to: #commits.
browser transmit to: #changes; from: #commits; andShow: [ :a :commitInfo |
	(IceDiffChangeTreeBuilder new entity: commitInfo; diff: (IceDiff from: commitInfo to: bootstrapCommit); buildOn: a) title: 'Changes' ].
browser transmit from: #commits; from: #changes; to: #diff; andShow: [ :a |
	a diff title: 'Left: working copy / Right: incoming updates'; display: [ :commitInfo :change |
		{ change theirVersion ifNil: ''. change myVersion ifNil: ''. } ] ].
browser openOn: { headCommit }.
The merge operation only changes the Git working copy; no code is loaded into the image. If you want to test the PR code, you currently need to open Iceberg and reload all packages in the Pharo repository (Packages tab, Reload all).
Expect troubles 🙂
— Pavel

July 06, 2017

Pharo Weekly - Iceberg 0.5.1 with Pull Request review tool

I just released Iceberg version 0.5.1 with a Pull Request tool that Guille and I have worked on since yesterday.
It allows you to list open pull requests (by right-clicking on a repo: GitHub/Review pull requests… option):
And then if you double-click on one (or select it with the right button), you will see this:
It allows you to see changes and:
– merge changes into your image (in case you want to look at the code in more detail, run tests, etc.)
– accept a pull request
– reject a pull request
No, it does not show (at least *yet*) comments, and it does not allow you to add comments, reviews, etc.
This could be done, but there is no time to implement it now, so for now this has to be enough.
Again, this can be loaded in a 6.0 image by executing this script:
#('Iceberg-UI' 'Iceberg-Plugin' 'Iceberg-Metacello-Integration' 'Iceberg-Libgit' 'Iceberg' 'BaselineOfIceberg' 'LibGit-Core' 'BaselineOfLibGit') do: [ :each | each asPackage removeFromSystem ].
Metacello new
	baseline: 'Iceberg';
	repository: 'github://pharo-vcs/iceberg:v0.5.1';
	load.
(and you still need to have the VM that is meant for Pharo 7)
These tools are open for you to use on your projects… and to improve them; I accept pull requests on pharo-vcs/iceberg.

Pharo Weekly - SmartTests call for users

Hi everyone,

I’m working on a new plugin for Nautilus/Calypso that will help us with testing.

The goal is to provide the selection of tests we should run after a modification.

It would be great if you agreed to try it or want to use it.

Currently, by installing the plugin, when you select a method in Calypso or Nautilus, you will see a new critique that offers to run the tests related to the method you’ve selected (or class ^^ )

Another goal is to measure the efficiency of this new tool and how we will use it. That’s why I’ve also developed a plugin that records how we use the plugin.

If you agree to help me, please tell me (so I will be able to estimate how much data I will obtain).

Now the commands.

If you’d like to help me, the command is simple:

Metacello new
    smalltalkhubUser: 'badetitou' project: 'TestsUsageAnalyser-CoraExtends';
    configuration: 'TestsUsageAnalyser_CORAExtends';
    version: #stable;
    load.

If you’d like to use the plugin or just want to try it, it’s also simple:

Metacello new
    smalltalkhubUser: 'badetitou' project: 'CORA';
    configuration: 'CORA';
    version: #stable;
    load.

For the version with Calypso (because it’s cool):

Metacello new
    smalltalkhubUser: 'badetitou' project: 'CORA';
    configuration: 'CORA_Calypso';
    version: #stable;
    load.

So, if you want to help me and use Calypso, you should run the first and the third commands.

Options for the plugin are available in the Pharo settings under the group ‘TestRegression’, so you can extend the plugin with your own testing-strategy logic.

To disable the spy (the first package), uncheck ‘Test Usage Analyser’ in the options.

I’m going to write a blog post that explains the whole plugin and how to use it.

If you find bugs (but there are no bugs 😉), please tell me and I will fix them as soon as possible.

If you’d like me to add a feature, tell me too.

Thanks a lot for your help.

Benoît Verhaeghe

Pharo Weekly - Community button :)


I added a small utility called “Community” to the catalog that allows you to quickly
access/browse the most prominent Pharo pages (Homepage, Discord, Mailinglist Archive, CI Server,
Books page, Association, Consortium, STHub) right from the world menu. It is also in Spotter.

Load Community from the Pharo catalog. As the menu entries are also mirrored in Spotter, you
can easily open Spotter, key in “Discord”, hit enter, and the local web browser should
open the Pharo chat.

Attached is a screenshot.

Maybe this is useful for others too.

Have fun


July 05, 2017

Stefan Marr - A 10 Year Journey, Stop 5: Growing the SOM Family

SOM, the Simple Object Machine, has been a steady companion for much of my research. As mentioned earlier, all this work on virtual machines started for me with CSOM, a C-based implementation of a simple Smalltalk language. From the beginning, SOM was meant as a vehicle for teaching language implementation techniques as well as doing research on related topics. As such, it is kept simple. The interpreter implementations do not aim to be fast. Instead, concepts are supposed to be expressed explicitly and consistently, so that the code base is accessible for students. Similarly, the language is kept simple and includes dynamic typing, objects, classes, closures, and non-local returns. With these features the core of typical object-oriented languages is easily covered. One might wonder about exceptions, but their dynamic semantics are very similar to non-local returns and are thus covered, too.
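To see why exceptions come almost for free, consider the classic non-local return in Smalltalk's Collection>>detect: — the caret inside the block returns from the enclosing method, which is exactly the stack-unwinding behavior exception handling needs (this is a standard Smalltalk idiom, not code from SOM itself):

```smalltalk
"Sketch of the classic Collection>>detect: idiom. The ^ inside the
block is a non-local return: it exits detect: itself, not just the
block, unwinding any frames in between."
detect: aBlock
	self do: [ :each |
		(aBlock value: each) ifTrue: [ ^ each ] ].
	self error: 'element not found'
```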

Originally, SOM was implemented in Java. Later, CSOM, SOM++ (a C++-based implementation), AweSOM (a Smalltalk-based implementation) joined the family. Some of this history is documented on the old project pages at the HPI, where much of this work was done.

When I picked up maintaining the SOM family for my own purposes, I added PySOM, a Python-based implementation, and JsSOM, implemented in JavaScript. As part of the work on building a fast language implementation, I also added TruffleSOM, a SOM implementation using the Truffle framework on top of the JVM, as well as RPySOM, an RPython-based bytecode interpreter for SOM, and RTruffleSOM, a Truffle-like AST interpreter implemented in RPython.

A Fast SOM

For TruffleSOM and RTruffleSOM, the focus was on performance. This means the clarity and simplicity got somewhat compromised. In the code base, that’s usually visible in the form of concepts being implemented multiple times to cover different use cases or special cases. Otherwise, the language features haven’t really changed. The only thing that got extended is the set of basic operations implemented for SOM, which we call primitives, i.e., builtin operations such as basic mathematical operations, bit operations, and similar things that either cannot be expressed in the language, or are hard to express efficiently.

The main reason to extend SOM’s set of primitives was to support a wide set of benchmarks. With the Are We Fast Yet project, I started comparing the performance of a common set of object-oriented language features across different programming languages. One of the main goals for me was to be able to understand how fast TruffleSOM and RTruffleSOM are, for instance compared to state-of-the-art Java or JavaScript VMs.

Well, let’s have a look at the results:

Performance Overview of SOM Implementations

The figure shows the performance of various SOM implementations relative to Java 1.8, i.e., the HotSpot C2 compiler. To be specific, it shows the peak performance discounting warmup and compilation cost. As another reference point for a common dynamic language, I also included Node.js 8.1 as a JavaScript VM.

As the numbers show, TruffleSOM and RTruffleSOM reach about the same performance on the used benchmarks. Compared to Java, they all are about 2-4x slower. Looking at the results for Node.js, I would argue that I managed to reach the performance of state-of-the-art dynamic language VMs with my little interpreters.

The simple SOM implementations are much slower, however. SOM and SOM++ are about 500x slower. That is quite a bit slower than the performance reached by the Java interpreter, which is only about 10-50x slower than just-in-time compiled and highly optimized Java. The slowness of SOM and SOM++ is very much expected because of their focus on teaching. Besides the many little design choices that are not optimal for performance, there is also the bytecode set, which is designed to be fairly minimal and thus causes a high overhead compared to the optimized bytecode sets used by Java, Ruby, or Smalltalk-80.

Making SOM++ Fast with Eclipse OMR

As shown with TruffleSOM and RTruffleSOM, meta-compilation approaches are one possible way to gain state-of-the-art performance. Another promising approach is the reuse of existing VM technology in the form of components to improve existing systems. One of the most interesting systems in that field is currently Eclipse OMR. The goal of this project, which is currently driven by IBM, is to enable languages such as Ruby or Python to use the technology behind IBM’s J9 Java Virtual Machine. At some point, they decided to pick up SOM++ as a showcase for their technology. They first integrated their garbage collector, and later added some basic support for their JIT compiler. My understanding is that it currently compiles each bytecode of a SOM method into the J9 IR using the JitBuilder project, allowing a little bit of inlining, but not doing many optimizations. And the result is a 4-5x speedup over the basic SOM++ interpreter. For someone implementing languages, such a speedup is great, and nothing to sneeze at, even if we start from a super slow system. But as a result, you reach the performance of optimized interpreters, while still maintaining a minimal bytecode set and the general simplicity of the system. Of course, minus the complexity of the JIT compiler itself.

To reach the same performance as TruffleSOM and RTruffleSOM, there is quite a bit more work to be done. I’d guess SOM++ OMR would need more profiling information to guide the JIT compiler. And it probably will also need a few other tricks, like an efficient object model and stack representation, to really achieve the same speed. But anyway, to me it is super cool to see someone else picking up SOM for their purposes and building something new with it 🙂.

Other Uses of SOM

And while we are at it: over the years, some other projects spun off from SOM. There was NXTalk for the Lego Mindstorms system. My own ActorSOM++, which implemented a simple actor language as part of SOM. And more recently, SOMns, a Newspeak implementation derived from TruffleSOM. You might have noticed, it’s actually a bit faster than TruffleSOM itself :) And it supports all kinds of concurrency models, from actors over CSP, STM, and fork/join to classic threads and locks.

Similar to SOM++ OMR, the Mu Micro VM project picked up a SOM implementation to showcase their own technology. Specifically, they used RPySOM, an RPython-based bytecode interpreter for their experiments.

Guido Chari forked TruffleSOM to build TruffleMate and experiment with making really all parts of a language runtime reflectively accessible, while maintaining excellent performance.

And last, but not least, Richard Roberts is currently working on a Grace implementation on top of SOMns.

So there are quite a few things happening around SOM and the various offspring. I hope, the next 10 years are going to be as much fun as the last.

And with that, I’ll end this series of blog posts. If you’re interested in learning more, check out the community section on the SOM homepage, ask me on Twitter @smarr, or send me an email.

Pharo Weekly - News from the git battle front :)


This is my weekly ChangeLog, from 26 June 2017 to 2 July 2017.
You can see it in a better format by going here:


29 June 2017:

*    … and I spent some time figuring out why Windows users have a persistent error about accessing files.

We should always remember the Windows path length limitation (256) 🙂

Now, the workaround for this is to execute this in command line:

git config --system core.longpaths true

but of course, this is just a workaround, because people using Iceberg might not have a
command-line git client installed. I will need to check this in the future 😦

*    I spent some time trying to get Iceberg version dev-0.5
to load properly (yesterday’s script is not working).

The reason is that +Metacello+ fails to upgrade packages from a baseline. And there is no way (at least
that I found) to force the upgrade.

So this is the updated script to test dev-0.5:

1. Download latest vm and image.

wget -O- | bash
wget -O- | bash # for linux systems

2. Execute this on your image

#('Iceberg-UI' 'Iceberg-Plugin' 'Iceberg-Metacello-Integration' 'Iceberg-Libgit' 'Iceberg' 'BaselineOfIceberg' 'LibGit-Core' 'BaselineOfLibGit')
	do: [ :each | each asPackage removeFromSystem ].

Metacello new
	baseline: 'Iceberg';
	repository: 'github://pharo-vcs/iceberg:dev-0.5';
	load.

This should actually remove the old Iceberg version and then install the new one.

28 June 2017:

*    Ok, I got the VM to compile correctly with the new libgit2 version, and now the +latest vm+ comes with
+libgit 0.25.1+ for both the 32- and 64-bit versions.

I also made some minor fixes to +iceberg dev-0.5+ and it should be ready to test and release. This
version incorporates some important changes that will allow us to work with it to make changes to
Pharo itself (and that will be noticed on big projects):

* it has cherry-pick.
* it speeds up synchronization by introducing more precise comparisons instead of making a “full scan”
* it keeps the branch on disk and the branch in Iceberg in sync (before, it kept them separately and it was very confusing)

To test it, you can execute:

wget -O- | bash
wget -O- | bash # on linux systems

then you will need to load version +dev-0.5+ :

Metacello new
	baseline: 'Iceberg';
	repository: 'github://pharo-vcs/iceberg:dev-0.5';
	load.
"And you will need to execute this… I will need to update the baseline with this,
now that I think of it :)"
LGitExternalStructure allSubclassesDo: #compileFields.

*    I finished ensuring Iceberg will work on 64 bits.

Now, I needed to make some fixes for UFFI, which I put in case 20198 (it is imperative to include
this to be able to backport 0.5 into Pharo 6.0). Also, I will need to promote a new VM as stable.

I’m not sure I want to backport this into Pharo 6. I know I promised, but the complications are… more than
the benefits, I think: you would need a new VM. People will not know that, and they will download a P6 image
with the older VM, and this will cause problems.

Maybe it is better to move all this to P7?


Pharo Weekly - PharoCloud Ephemeric Cloud updates


Some updates on Ephemeric Cloud development:

1) The Ephemeric cloud moved from OVH to Digital Ocean. In theory this opens an
opportunity to run instances in different DO datacenter regions. I am thinking
about adding a node in Frankfurt. Any thoughts?

2) Added support for Pharo 6 (32-bit only for now). Pharo 6 is particularly
slow in read-only environments, so… now the environment is writable. All
changes are written to memory and reset on restart. It kind of works: Pharo 6
images start in 1-2 seconds (after the initial load, which may take around
10 seconds). I migrated all my apps to Pharo 6 and everything seems to work
fine now.

3) Added custom ports support. It is now possible to expose any additional
ports needed for your image. If you set an integer array in the property
“customPorts”, the system will expose and forward these ports on start of the
instance. The public addresses of the exposed ports are available in
“mappedPorts”. Note that the public addresses change on every start (and are
not accessible on stopped images).

4) As a result you can now remotely connect to, debug, and control images running
at PharoCloud using PharmIDE.
This is so awesome! Thank you to Denis for his great work on PharmIDE.
Please try it and tell me if it works for you.

5) A completely new HTTP gate for ephemerics. Instead of the golang version, it
now runs on nginx + Lua. This allows virtually all features to work.

6) Optimized image upload should give a pretty good boost when sending new
images to the cloud.

Just a reminder: to get free access to the cloud you can use your Pharo
Association account or register at the pharocloud main site, where you can
log in.
Docs can be found here:

Looking forward to your feedback.


July 04, 2017

Torsten Bergmann - P3, a modern, lean and mean PostgreSQL client for Pharo

P3 is a modern, lean and mean PostgreSQL client for Pharo provided by Sven. P3Client uses frontend/backend protocol 3.0 (PostgreSQL version 7.4 [2003] and later). Read the announcement here and check the source code here.

Craig Latta - Browser-to-browser websocket tunnels with Caffeine and livecoded NodeJS


In our previous look at livecoding NodeJS from Caffeine, we implemented tweetcoding. Now let’s try another exercise, creating WebSockets that tunnel between web browsers. This gives us a very simple version of peer-to-peer networking, similar to WebRTC.

Once again we’ll start with Caffeine running in a web browser, and a NodeJS server running the node-livecode package. Our approach will be to use the NodeJS server as a relay. Web browsers that want to establish a publicly-available server can register there, and browsers that want to use such a server can connect there. We’ll implement the following node-livecode instructions:

  • initialize, to initialize the structures we’ll need for the other instructions
  • create server credential, which creates a credential that a server browser can use to register a WebSocket as a server
  • install server, which registers a WebSocket as a server
  • connect to server, which a client browser can use to connect to a registered server
  • forward to client, which forwards data from a server to a client
  • forward to server, which forwards data from a client to a server

In Smalltalk, we’ll make a subclass of NodeJSLivecodingClient called NodeJSTunnelingClient, and give it an overriding implementation of configureServerAt:withCredential:, for injecting new instructions into our NodeJS server:

configureServerAt: url withCredential: credential
  "Add JavaScript functions as protocol instructions to the
node-livecoding server at url, using the given credential."

  ^(super configureServerAt: url withCredential: credential)
    addInstruction: 'initialize'
    from: '
      function () {
        global.servers = []
        global.clients = []
        global.serverCredentials = []
        global.delimiter = ''', Delimiter, '''
        return ''initialized tunnel relay''}';
    invoke: 'initialize';
    addInstruction: 'create server credential'
    from: '
      function () {
        var credential = Math.floor(Math.random() * 10000)
        this.send((serverCredentials.length - 1) + '' '' + credential)
        return ''created server credential''}';
    addInstruction: 'install server'
    from: '
      function (serverID, credential) {
        if (serverCredentials[serverID] == credential) {
          servers[serverID] = this
          return ''installed server''}
      else {
        return ''bad credential''}}';
    addInstruction: 'connect to server'
    from: '
      function (serverID, port, req) {
        if (servers[serverID]) {
          servers[serverID].send(''connected:atPort:for: '' + (clients.length - 1) + delimiter + port + delimiter + req.connection.remoteAddress.toString())
          return ''connected client''}
        else {
          return ''server not connected''}}';
    addInstruction: 'forward to client'
    from: '
      function (channel, data) {
        if (clients[channel]) {
          clients[channel].send(''from:data: '' + servers.indexOf(this) + delimiter + data)
          return ''sent data to client''}
        else {
          return ''no such client channel''}}';
    addInstruction: 'forward to server'
    from: '
      function (channel, data) {
        if (servers[channel]) {
          servers[channel].send(''from:data: '' + clients.indexOf(this) + delimiter + data)
          return (''sent data to server'')}
        else {
          return ''no such server channel''}}'

We’ll send that message immediately, configuring our NodeJS server:

  configureServerAt: 'wss://yourserver:8087'
  withCredential: 'shared secret';

On the NodeJS console, we see the following messages:

server: received command 'add instruction'
server: adding instruction 'initialize'
server: received command 'initialize'
server: evaluating added instruction 'initialize'
server: initialized tunnel relay
server: received command 'add instruction'
server: adding instruction 'create server credential'
server: received command 'add instruction'
server: adding instruction 'install server'
server: received command 'add instruction'
server: adding instruction 'connect to server'
server: received command 'add instruction'
server: adding instruction 'forward to client'
server: received command 'add instruction'
server: adding instruction 'forward to server'

Now our NodeJS server is a tunneling relay, and we can connect servers and clients through it. We’ll make a new ForwardingWebSocket class hierarchy:


Instances of ForwardingClientWebSocket and ForwardingServerWebSocket use a NodeJSTunnelingClient to invoke our tunneling instructions.

We create a new ForwardingServerWebSocket with newThrough:, which requests new server credentials from the tunneling relay, and uses them to install a new server. Another new class, PeerToPeerWebSocket, provides the public message interface for the framework. There are two instantiation messages:

  • toPort:atServerWithID:throughURL: creates an outgoing client that uses a ForwardingClientWebSocket to connect to a server and exchange data
  • throughChannel:of: creates an incoming client that uses a ForwardingServerWebSocket to exchange data with a remote outgoing client.

Incoming clients are used by ForwardingServerWebSockets to represent their incoming connections. Each ForwardingServerWebSocket can provide services over a range of ports, as a normal IP server would. To connect, a client needs the websocket URL of the tunneling relay, a port, and the server ID assigned by the relay.
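Putting the instantiation message from the list above to use, connecting from a client browser might look like this (the relay URL, port number, and server ID are placeholder values, not taken from a real deployment):

```smalltalk
"Sketch only: the relay URL, port, and server ID below are placeholders.
This creates an outgoing client that tunnels to port 8091 of the
browser registered as server 3 at the given relay."
socket := PeerToPeerWebSocket
	toPort: 8091
	atServerWithID: 3
	throughURL: 'wss://yourserver:8087'.
```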

As usual, you can examine and try out this code by clearing your browser’s caches for (including IndexedDB), and visiting With browsers able to communicate directly, there are many interesting things we can build, including games, chat applications, and team development tools. What would you like to build?

July 03, 2017

Torsten Bergmann - Moldable Debugger in Pharo

Pharo has a moldable debugger built in, so you can even customize the debugging experience for your own needs. Look at this debugger for formal specs as an example:


Torsten Bergmann - VerStix

A new project to connect Pharo images to various services/languages in a reactive way via Vert.x: a Vert.x TCP EventBus Bridge client for Pharo Smalltalk.

You can interact with various vert.x components (Web, Auth, DB, MQ, etc) via EventBus. Code is on GitHub
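For context, the Vert.x TCP EventBus bridge wire format frames each JSON envelope with a 4-byte big-endian length prefix. A minimal JavaScript sketch of encoding a 'send' frame (the 'echo' address and body here are made up for illustration; this is not VerStix's own code, which is Smalltalk):

```javascript
// Encode one Vert.x TCP EventBus bridge frame:
// a 4-byte big-endian length prefix followed by UTF-8 JSON.
function encodeBridgeFrame(message) {
  const json = Buffer.from(JSON.stringify(message), 'utf8');
  const frame = Buffer.alloc(4 + json.length);
  frame.writeUInt32BE(json.length, 0);  // length of the JSON body
  json.copy(frame, 4);                  // JSON body follows the prefix
  return frame;
}

// Illustrative 'send' envelope addressed to a hypothetical 'echo' service:
const frame = encodeBridgeFrame({
  type: 'send',
  address: 'echo',
  body: { message: 'hello from Pharo' }
});
```

A Pharo client doing the equivalent writes the same length-prefixed JSON over a plain TCP socket, which is what makes the bridge easy to implement from any language.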

July 02, 2017

Marten Feldtmann - PUM, Topaz – calculated attributes

Due to needs in our application, I had to add (for the domain class hierarchy) the possibility to define calculated base attributes and calculated associations.

That means that the attribute does not hold any value (strings, numbers, … or sets of domain objects); instead, the values are calculated with the help of a compiled method block with only one parameter: the domain object holding the attribute.

This change only affects the Topaz generator – the clients are not changed.

Filed under: Smalltalk Tagged: Gemstone/S, PUM

July 01, 2017

Benoit St-Jean - PharoDays 2017

A ton of new videos from the PharoDays 2017 conference here!

With Pharo, dreaming is no longer a luxury!


Filed under: Pharo, Smalltalk Tagged: 2017, Pharo, PharoDAYS, Smalltalk

June 30, 2017

Cincom Smalltalk - Georg Heeg eK Celebrates 30th Anniversary!

We are honored to announce that our Premier Partner, Georg Heeg eK, will soon be celebrating its 30th anniversary. On July 7, 1987, Georg Heeg eK was officially founded and […]

The post Georg Heeg eK Celebrates 30th Anniversary! appeared first on Cincom Smalltalk.

Craig Latta - retrofitting Squeak Morphic for the web


Last time, we explored a way to improve SqueakJS UI responsiveness by replacing Squeak Morphic entirely, with morphic.js. Now let’s look at a technique that reuses all the Squeak Morphic code we already have.

many worlds, many canvases

Traditionally, Squeak Morphic has a single “world” where morphs draw themselves. To be a coherent GUI, Morphic must provide all the top-level effects we’ve come to expect, like dragging windows and redrawing them in their new positions, and redrawing occluded windows when they are brought to the top. Today, this comes at an acceptable but noticeable cost. Until WebAssembly changes the equation again, we want to do all we can to shift UI work from Squeak Morphic to the HTML5 environment hosting it. This will also make the experience of using SqueakJS components more consistent with that of the other elements on the page.

Just as we created an HTML5 canvas for morphic.js to use in the last post, we can do so for individual morphs. This means we’ll need a new Canvas subclass, called HTML5FormCanvas:


An HTML5FormCanvas draws onto a Form, as instances of its parent class do, but instead of flushing damage rectangles from the Form onto the Display, it flushes them to an HTML5 canvas. This is enabled by a primitive I added to the SqueakJS virtual machine, which reuses the normal canvas drawing code path.
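A common refinement in such a flush path, sketched below in plain JavaScript (this is illustrative, not the Caffeine primitive itself), is to merge accumulated damage rectangles into a single bounding box so only one canvas update is needed per frame:

```javascript
// Merge damage rectangles into one bounding box. Fewer flushes at the
// cost of possibly repainting clean pixels between the damaged areas.
function mergeDamage(rects) {
  if (rects.length === 0) return null;  // nothing to repaint
  let left = Infinity, top = Infinity, right = -Infinity, bottom = -Infinity;
  for (const r of rects) {
    left = Math.min(left, r.x);
    top = Math.min(top, r.y);
    right = Math.max(right, r.x + r.width);
    bottom = Math.max(bottom, r.y + r.height);
  }
  return { x: left, y: top, width: right - left, height: bottom - top };
}

// The merged box would then be flushed with one dirty-rect canvas call
// (browser API), e.g.:
//   ctx.putImageData(formPixels, 0, 0, box.x, box.y, box.width, box.height);

const box = mergeDamage([
  { x: 10, y: 10, width: 20, height: 20 },
  { x: 40, y: 5, width: 10, height: 10 }
]);
```

Whether merging wins depends on how scattered the damage is; widely separated small rectangles can make separate flushes cheaper.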

Accompanying HTML5FormCanvas are new subclasses of PasteUpMorph and WorldState:



HTML5PasteUpMorph provides a message interface for other Smalltalk objects to create HTML5 worlds, and to access the HTML5FormCanvas of each world and the underlying HTML5 canvas DOM element. An HTML5WorldState works on behalf of an HTML5PasteUpMorph to establish event handlers for the HTML5 canvas (such as for keyboard and mouse events).
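From the JavaScript side, that event wiring might look roughly like the following sketch (the canvas and world objects are stand-ins; the real HTML5WorldState lives in Smalltalk, and the handler shapes here are invented for illustration):

```javascript
// Wire browser input events on a canvas element to a Morphic world's
// event handler. The event objects passed along are illustrative.
function wireCanvasEvents(canvas, world) {
  canvas.addEventListener('mousedown', e =>
    world.handleEvent({ kind: 'mouseDown', x: e.offsetX, y: e.offsetY }));
  canvas.addEventListener('keydown', e =>
    world.handleEvent({ kind: 'keyDown', key: e.key }));
  // A tabindex makes a canvas focusable, so it can receive key events.
  canvas.setAttribute('tabindex', '0');
}

// Demo with fake DOM and world objects (no browser required):
const listeners = {};
const fakeCanvas = {
  attrs: {},
  addEventListener(type, fn) { listeners[type] = fn; },
  setAttribute(key, value) { this.attrs[key] = value; }
};
const events = [];
const fakeWorld = { handleEvent(e) { events.push(e); } };
wireCanvasEvents(fakeCanvas, fakeWorld);
listeners.mousedown({ offsetX: 5, offsetY: 7 });  // simulate a click
```

Keeping this wiring per-canvas is what lets each HTML5 world receive input independently, with the browser doing the hit-testing between worlds.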

HTML5 Morphic in action

You don’t need to know all of that just to create an HTML5 Morphic world. You only need to know about HTML5PasteUpMorph. In particular, (HTML5PasteUpMorph class)>>newWorld. All of the traditional Squeak Morphic tools can use HTML5PasteUpMorph as a drop-in replacement for the usual PasteUpMorph class.

There are two examples of single-window Morphic worlds in the current Caffeine release: a workspace and a class browser. I consider these two tools to be the “hello world” exercise for UI framework experimentation, since you can use them to implement all the other tools.

We get an immediate benefit from the web browser handling window movement and clipping for us, with opaque window moves rendering at 60+ frames per second. We can also interleave Squeak Morphic windows with other DOM elements on the page, which enables a more natural workflow when creating hybrid webpages. And we can style our Squeak Morphic windows with CSS, as we would any other DOM element, since as far as the web browser is concerned they are just HTML5 canvases. This makes effects like the rounded corners and window button trays that Caffeine uses very easy.

Now, we have flexible access to the traditional Morphic tools while we progress with adapting them to new worlds like morphic.js. What shall we build next?