Planet Smalltalk

October 22, 2016

Stefan Marr - Language Research with Truffle at the SPLASH’16 Conference

Next weekend starts one of the major conferences of the programming languages research community. The conference hosts many events including our Meta’16 workshop on Metaprogramming, SPLASH-I with research and industry talks, the Dynamic Languages Symposium, and the OOPSLA research track.

This year, the overall program includes 9 talks on Truffle- and Graal-related topics, covering areas such as optimizing high-level metaprogramming, executing low-level machine code, benchmarking, and parallel programming. I posted a full list including abstracts here: Truffle and Graal Presentations @SPLASH’16. Below is an overview with links to the talks:

Sunday, Oct. 30th

AST Specialisation and Partial Evaluation for Easy High-Performance Metaprogramming (PDF)
Chris Seaton, Oracle Labs
Meta’16 workshop 11:30-12:00

Towards Advanced Debugging Support for Actor Languages: Studying Concurrency Bugs in Actor-based Programs (PDF)
Carmen Torres Lopez, Stefan Marr, Hanspeter Moessenboeck, Elisa Gonzalez Boix
Agere’16 workshop 14:10-14:30

Monday, Oct. 31st

Bringing Low-Level Languages to the JVM: Efficient Execution of LLVM IR on Truffle (PDF)
Manuel Rigger, Matthias Grimmer, Christian Wimmer, Thomas Würthinger, Hanspeter Mössenböck
VMIL’16 workshop 15:40-16:05

Tuesday, Nov. 1st

Building Efficient and Highly Run-time Adaptable Virtual Machines (PDF)
Guido Chari, Diego Garbervetsky, Stefan Marr
DLS 13:55-14:20

Optimizing R Language Execution via Aggressive Speculation
Lukas Stadler, Adam Welc, Christian Humer, Mick Jordan
DLS 14:45-15:10

Cross-Language Compiler Benchmarking—Are We Fast Yet? (PDF)
Stefan Marr, Benoit Daloze, Hanspeter Mössenböck
DLS 16:30-16:55

Thursday, Nov. 3rd

GEMs: Shared-memory Parallel Programming for Node.js (DOI)
Daniele Bonetta, Luca Salucci, Stefan Marr, Walter Binder
OOPSLA conference 11:20-11:45

Efficient and Thread-Safe Objects for Dynamically-Typed Languages (PDF)
Benoit Daloze, Stefan Marr, Daniele Bonetta, Hanspeter Mössenböck
OOPSLA conference 13:30-13:55

Truffle and Graal: Fast Programming Languages With Modest Effort
Chris Seaton, Oracle Labs
SPLASH-I 13:30-14:20

October 21, 2016

ESUG news - Photos from ESUG 2016

Here are some photos from this year's ESUG 2016 in Prague:

October 20, 2016

Pharo News - [ANN] Sparta v1.1

Aliaksei Syrel writes:

I am happy to announce the release of Sparta v1.1 for Pharo 6.

It can be bootstrapped with the following script:

Metacello new
  baseline: 'Sparta';
  repository: 'github://syrel/sparta:v1.1/src';
  load: #file:core

Examples are on the class side of: MozExamples, MozTextExamples

(For Linux users: if you use 32-bit Pharo on 64-bit Linux, Sparta will not work, since the 32-bit plugin depends on 32-bit GTK, which conflicts with 64-bit GTK. Either use 32-bit Linux or 64-bit Pharo. I tested Sparta with the 64-bit VM on Mac and Linux; it works, but some features fail because FFI is not ready.)

The v1.1 release focuses on hardware acceleration, Windows support and text rendering.

What is new:
 – Default backends on all platforms changed from software to hardware accelerated.
 – Now also works on Windows! The default backend is Direct2D for drawing and DirectWrite for text. On multi-GPU machines the per-app default setting is respected; for Nvidia it can be changed in the Nvidia control panel. Sparta is twice as fast on a discrete GPU as on an integrated one.
 – Added initial text support, for instance rendering and high-precision measurement.
 – The per-platform settings system is now image based. It allows enabling/disabling hardware acceleration, changing default backends and changing font-mapping tables.

Some text examples (rendering and measurement):
[Image: sparta-text-haiku.png]
[Image: sparta-text-measurement.png]

Andres Valloud - The car reliably drives itself, except it doesn't

On June 30th, 2016, the National Highway Traffic Safety Administration (NHTSA) opened an investigation into Tesla Motors due to a fatal crash involving a Tesla Model S.  The issue is that the car's Autopilot software was enabled and driving the car.  The autonomous system missed a truck crossing the highway perpendicular to the Tesla's direction of travel.  As a result, the Tesla passed under the truck's trailer.  Presumably, the parts of the trailer that went through the Tesla's windshield resulted in the driver's death.

Tesla's blog post on the incident states, in part:

This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.

These averages are so broad the comparisons are meaningless.

For example, Tesla quotes (without citation) that among all vehicles in the US, there is a fatality every 94 million miles.  The "all vehicles" category presumably includes buses and farm equipment.  Does comparing the fatality rates of mopeds and Tesla cars make sense?  Furthermore, Tesla's Model S has a suggested retail price of $70,000 USD, which is hardly representative of average vehicles.  What is the fatality rate for vehicles comparable to Tesla's Model S?  Are the comparable vehicles similar enough to reach meaningful conclusions?  What do different vehicle classes look like?  How are the associated driver populations for each category best described?  Are populations associated with higher fatality rates likely to adopt autonomous driving systems in the first place?

Recall that the fatality-per-mile-traveled averages include the effects of negligent drivers.  In this light, Tesla's own numbers are questionable at face value.  Specifically, according to the CDC, 31% of US driving-related fatalities involve alcohol impairment.  Thus, sober human drivers cause roughly one fatality per 136 million miles traveled (94 million / (1 - 0.31) ≈ 136 million), instead of the 94 million miles quoted.  The CDC data also indicates a further 16% of crashes involve drugs other than alcohol, but the number of resulting fatalities for those collisions could not be clarified with the available data.

Tesla's post compares fatality per mile averages.  However, Tesla's total of 130 million miles traveled pales in comparison to the number of miles driven to arrive at the CDC averages.  It looks like the sample sizes differ by several orders of magnitude.  Is Tesla's sample size really enough to reach accurate conclusions?  Tesla seems to think so, thus an assessment is fair game.  The above numbers show the Autopilot software compares favorably to possibly driving under the influence of intoxicants.  Moreover, Autopilot also compares as roughly equivalent to an average driver: likely speeding, perhaps reckless, in some cases impaired by drugs other than alcohol.  Altogether, Autopilot is not a good driver.

As a side note, when Tesla invokes fatality-per-miles-traveled averages, the implication is that Tesla's Autopilot is better than average and hence good.  But most drivers incorrectly believe themselves better than average.  It follows that the average driver underestimates what it takes to be a good driver.  Tesla's statement could be setting up average readers to deceive themselves by tacitly appealing to their illusory superiority.

But back to the story.  What are the self-driving car performance expectations?  This June 30th CNN article states, in part:

Experts say self-driving systems could improve safety and reduce the 1.25 million motor vehicles deaths on global roads every year. Many automakers have circled 2020 as a year when self-driving systems will be released on public roads.

The year 2020 is basically just around the corner.  Today, drivers feel compelled to forcibly take control from autonomous driving systems alarmingly often.  This LA Times article from January 2016 states, in part:

The California Department of Motor Vehicles released the first reports from seven automakers working on autonomous vehicle prototypes that describe the number of "disengagements" from self-driving mode from fall 2014 through November.  This is defined by the DMV as when a "failure of the autonomous technology is detected" or when the human driver needs to take manual control for safety reasons.

Google Inc. reported 341 significant disengagements, 272 of which were related to failure of the car technology and 69 instances of human test drivers choosing to take control. On average, Google experienced one disengagement per 1,244 miles. [total 424,331 miles traveled]

The average driver response time was 0.84 of a second, it said. [Who is "it"?  The DMV?  Google?]

Most of the cases in which drivers voluntarily took control of the car involved "perception discrepancy," or when the car's sensors did not correctly sense an object such as overhanging tree branches, Google said. 

Bosch recorded 625 disengagements, or about one per mile, and Delphi Automotive totaled 405, or one per 41 miles.  [Delphi total 16,662 miles traveled]

Nissan North America Inc. reported 106 disengagements, which breaks down to one per 14 miles; Mercedes-Benz Research and Development North America Inc. listed 1,031, or one every two miles; and Volkswagen Group of America Inc. totaled 260, or one every 57 miles.  [VW total 14,945 miles traveled]

Tesla Motors Inc. said it did not have any disengagements from autonomous mode. It did not report how many miles its self-driving cars had traveled. 

Both the information and the questions required for good understanding are missing.  Would you be comfortable being driven by someone who misses branches, or just fails to drive at all, as frequently as once a mile?  Are the driving conditions for those driven miles reported by Google and others realistic?  What proportion were driven in snow, ice, heavy rain, fog, or smoke?  Did autonomous driving systems encounter geese, ducks, or deer on the road?  How do those systems handle emergency situations?

Suppose you will never let beta software drive you around.  What happens when you are affected by someone who does?  Back to the CNN article,

Experts have cautioned since Tesla unveiled autopilot in October that the nature of the system could lead to unsafe situations as drivers may not be ready to safely retake the wheel.

If Tesla's autopilot determines it can no longer safely drive, a chime and visual alert signals to drivers they should resume operation of the car. A recent Stanford study found that a two-second warning -- which exceeds the time Tesla drivers are sure to receive -- was not enough time to count on a driver to safely retake control of a vehicle that had been driving autonomously.

Given this expert assessment, what does the lack of Tesla disengagements in California DMV's report mean?  That Tesla's software is just better?  That the average Tesla driver is less engaged?  Does the Tesla crash suggest so-so software is duping drivers into not paying attention?

But even if a timely warning were possible, are driving autopilots a good idea in the first place?  In aviation, autopilots do most of the flying, and as a result human pilots' ability to fly by hand is compromised.  Recovering from flight emergencies often requires manual flying, and an emergency is not the time to discover those skills are lacking.  Specifically, the professional recommendation is:

The SAFO [Safety Alert for Operators], released earlier this month, recommends that professional flight crews and pilots of increasingly advanced aircraft should turn off the autopilot and hand-fly during "low-workload conditions," including during cruise flight in non-RVSM airspace. It also recommends operators should "promote manual flight operations when appropriate" and develop procedures to accomplish this goal.

"Autoflight systems are useful tools for pilots and have improved safety and workload management, and thus enabled more precise operations," the SAFO notes. "However, continuous use of autoflight systems could lead to degradation of the pilot's ability to quickly recover the aircraft from an undesired state."
The SAFO adds that, though autopilots have become a prevalent and useful tool for pilots, "unfortunately, continuous use of those systems does not reinforce a pilot's knowledge and skills in manual flight operations."

In contrast, driving autopilots are promoted for heavy use, and especially for low-workload driving scenarios.  The ideal situation often casts the driver as a self-absorbed mass transit passenger:

Note the irony of "progress" illustrated by reading a paper (!) book, comfortably sitting with all driving controls out of reach.  And there is more than one irony in play: studying from paper rather than a tablet is associated with better comprehension and retention of the material.  Paper also leads to better results than a Kindle.  Why does this picture associate technological improvement with paper books?

Of course the flying environment is very different from the driving environment.  Kids and pets don't run in front of the plane from behind a row of clouds.  Plane collisions are infrequent thanks to spacious traffic control enforced with multiple radars.  And if you listen to recordings of plane mishaps, you will notice bad situations develop over comparatively long periods of time.  Limited flight autopilot failures can be tolerated because the entire flying environment is engineered to catch problems before they become insurmountable.

In contrast, there is no car equivalent to the cockpit's emergency procedure binder.  The driving experience still requires quick, judicious action.  Experience with flight autopilots shows excessive dependency can result in compromised pilot skills.  So why should the professional advice for planes, with all the implied liability and gravitas, be any different in nature for cars?  And if inattentive drivers do not have the time to recover from an undesired state, why should drivers stop paying attention in the first place?  Tesla's own advice agrees: "be prepared to take over at any time".

For clarity's sake, maybe Tesla's "Autopilot" is better described in terms of "Driver Assist".

For time's sake, maybe traffic and community planning is a better way to curb hours wasted while driving.  As an example, shutting down container shipping terminals increases truck traffic.  Trucks disproportionately wear roads because pavement damage is proportional to the fourth power of weight --- each truck axle carrying 18,000 pounds is equivalent to 10,000 cars.  So, more truck traffic means more road work, which in turn causes even more congestion.  Self-driving vehicles can be irrelevant to traffic.

For everyone's sake, maybe a characterization of the behaviors correlated with fatalities could lead to keeping drivers exhibiting those behaviors off the road.  Ignition interlocks prevent drunk driving 70% of the time according to the CDC, but of course this is invasive for the sober majority.  So instead, a reasonable non-invasive Driver Assist feature could detect unfit driving.  Critically, this approach could be, in some respects, just as helpful without requiring the development of a fully fledged autonomous driving system.

Update: it looks like common sense is finally catching on --- the NTSB says fully autonomous cars probably won't happen, many proponents of the technology are running into trouble and/or scaling back expectations, and Tesla just disabled its Autopilot system.

October 19, 2016

Pharo Weekly - Sparta V1.1

I am happy to announce the release of Sparta v1.1 for Pharo 6.
It can be bootstrapped with the following script:
Metacello new
  baseline: 'Sparta';
  repository: 'github://syrel/sparta:v1.1/src';
  load: #file:core
Examples are on class side of: MozExamples, MozTextExamples
(for linux users: if you use 32bit pharo on 64bit linux, sparta will not work, since 32bit plugin depends on 32bit GTK which conflicts with 64bit GTK. Either use 32bit linux or 64bit pharo. I tested sparta with 64bit vm on mac and linux – it works, but some features fail because FFI is not ready)
Release of v1.1 is focused on hardware acceleration, windows support and text rendering.
What is new:
 – Default backends on all platforms changed from software to hardware accelerated.
 – Now also works on Windows! Default backend is Direct2D for drawings and DirectWrite for text. On multi-gpu machines per-app-default setting is respected. In case of Nvidia it can be changed in nvidia control panel. Sparta is x2 faster on discrete gpu than on integrated one.
 – Added initial text support, for instance rendering and high precision measurement.
 – Per-platform settings system is now image based. Allows to enable/disable hardware acceleration, change default backends, change font-mappings tables.
Some text examples:

Pharo Weekly - MiniKanren in Pharo :)

So this isn't a specific out-of-the-box solution to any of the
code-generation use cases people have been discussing, but I thought it was relevant enough to toss it out there.

I'm currently doing some work on program synthesis using a Pharo port of the miniKanren logic programming language. The Pharo version is here:!/~EvanDonahue/SmallKanren .

So far, I've only used it to implement this paper on generating Scheme
programs (!/~EvanDonahue/SmallKanren), the
implementation of which is here:!/~EvanDonahue/Barliman .

I've thought a bit about things like generating GUIs, and it seems like
something interesting could be done there: you could imagine feeding it
constraints and then giving the results a thumbs up or thumbs down to see
other satisfying layouts more or less like the ones you voted on. However,
the probabilistic learning component is an object of current research and
isn't ready yet.

I was planning on sending out an announcement once it was more complete and
robust; right now it has lots of tests but not much documentation, and the
architecture is changing rapidly per research requirements. But if anyone is
interested in knowing more about it, feel free to get in touch.
This is part of a broader research program on intelligent interfaces, so
hopefully some cooler things come out of it later down the line.


Cincom Smalltalk - MediaGeniX Wins Award from Flemish Government

In an awards ceremony that was held on October 18, 2016 in Brussels, Belgium, Cincom Smalltalk™ partner, MediaGeniX, won the Prijs van de Vlaamse Regering voor de Beloftevolle Onderneming van […]

The post MediaGeniX Wins Award from Flemish Government appeared first on Cincom Smalltalk.

October 18, 2016

ESUG news - [ANN] Zürich Smalltalk Meetup Nov 8th, 2016

There is great news about the upcoming Smalltalkers' Meetup in Zürich.

We've found a conference room for the evening and will start off with a "show us your project" session in a room with a big screen; afterwards we will move on to the social part at a nice Australian steakhouse. The best thing is that we already have a first session: Michal is going to show his new GemStone-based persistence framework for Pharo. It can be used to develop in Pharo and keep your objects stored in GemStone.

We need your input

This new setup with a meeting room adds more value for all of us: we can now not only talk about each other's experiences and projects, but also see what people are up to and what they have found. So to make this event even better than before, we need people who'd like to share their ideas or their latest work on some hobby or professional project. There's no need for a full-fledged sales pitch with fog and visual effects; just bring your laptop and show us what you do! The room is reserved for about an hour, so we can have, say, 2-3 presentations before we leave for beer and steaks. So if you have something interesting to show, or would like to find people to join an early project you're starting or planning, give us a chance to learn about it. If you've built some interesting tool or use Smalltalk for something very special to you, let us know and share your fascination and ideas. We'll surely appreciate that! But if you spontaneously decide you want to show something or start a discussion, feel free to do so. You just need to be prepared that we might have to leave the room before it's your turn: timeslots are limited.

It would be good if you pre-announced your demo to me, for two reasons: first, we can see whether we need to rent the room for a little longer; even better, your topic might attract more people, so I can post it and make sure people get the message.

So where and when?

We’ll meet at 7pm on Nov 8th, 2016 at

Alpha Sprachwelt AG
Stadelhoferstrasse 10
8001 Zürich

Around 8-8:30 pm we’ll walk over to the Outback Lodge where we’ve reserved a table. With Steaks and Drinks we can discuss ideas, talk about the good old times or do some spontaneous hacking. For directions to the Lodge visit

How to register? How much is it?

This is a meetup of friends, so it's free to come; we look forward to meeting you and keeping in touch. You pay for your meal and drinks, but the rest is free. We ask you to register on our Doodle page. Please also indicate in a comment to your registration if you want to give a short presentation.

Cincom Smalltalk - Cincom® ObjectStudio® 8.8 and Cincom® VisualWorks® 8.2 Are Here!

The Personal Use License Versions Are Now Available! It is our pleasure to bring you the current Personal Use License (PUL) versions of Cincom Smalltalk™.  This major release includes Cincom ObjectStudio 8.8 and Cincom […]

The post Cincom® ObjectStudio® 8.8 and Cincom® VisualWorks® 8.2 Are Here! appeared first on Cincom Smalltalk.

Cincom Smalltalk - Zürich Smalltalkers Meeting

Since the Zürich Smalltalkers had so much fun the last few times they’ve gotten together, they have decided that it would be a shame to miss an opportunity to reconvene! […]

The post Zürich Smalltalkers Meeting appeared first on Cincom Smalltalk.

Joachim Tuchel - Zürich Smalltalk Meetup Nov. 8th, 2016

There is great news about the upcoming Smalltalkers' Meetup in Zürich. We've found a conference room for the evening and will start off with a "show us your project" session in a room with a big screen and start the social part after your input at a nice Australian Steakhouse. The best thing is: we… Continue reading Zürich Smalltalk Meetup Nov. 8th, 2016

October 17, 2016

Cincom Smalltalk - Smalltalk Digest: October Edition

The Personal Use License Versions of Cincom® ObjectStudio® 8.8 and Cincom® VisualWorks® 8.2 Are Here! It is our pleasure to bring you the current Personal Use License (PUL) versions of […]

The post Smalltalk Digest: October Edition appeared first on Cincom Smalltalk.

Pharo Weekly - ALLSTOCKER internals

Hi all,

From Torsten, I've received a request to post about some technical
details of ALLSTOCKER ( .
I hope these notes will be interesting to Pharo web developers.

- Seaside / Teapot
We are using Seaside as the main framework for the ALLSTOCKER marketplace.
Seaside's component architecture is great for extending the application in
an organized way.
The ALLSTOCKER prototype was originally composed of only 3 class
categories. These have gradually grown to 70 categories, but we still feel
they are manageable.
We also use Teapot for building web-based APIs quickly (a minimal route
sketch follows below). Recently we've built webhook handlers for
integration with other services.
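
To give a flavor of how lightweight Teapot routes are, here is a minimal sketch (my own illustration, not code from ALLSTOCKER; the paths and responses are made up):

   "Start a tiny Teapot server with one static and one dynamic route."
   Teapot on
      GET: '/status' -> 'OK';
      GET: '/machines/<id>' -> [ :req | 'machine ', (req at: #id) ];
      start.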

- Templating with Mustaside
We need a lot of responsive-design web pages to support various mobile
devices. (This is especially important for Southeast Asian countries, where
tablets are more popular than PCs.)
We wanted to adopt existing Twitter Bootstrap templates to save time, so
Mustaside was our choice.!/~MasashiUmezawa/Mustaside
Before Mustaside, there were a lot of noisy #div: sends in our code.
Now they are gone.

- Localization
Our business target is worldwide, so localization is a very important
topic. Currently ALLSTOCKER supports 4 languages, and we will add Chinese
soon.
Translation strings are not only in Smalltalk code but also in Mustache
templates, so we selected Soup for extracting the translatable strings from
those templates.!/~PharoExtras/Soup
For managing translations, we use the Gettext package.!/~PharoExtras/Gettext

- Databases
For transactional data, we chose Glorp. Although there are mapping costs,
we prefer an RDB (Postgres); it is reliable for handling precious
order-related data.
However, to support complex searches over machines, we use Neo4j, a graph
database. It supports a very powerful query language called Cypher, which
lets us avoid complex table joins and get aggregated results faster.!/~MasashiUmezawa/Neo4reSt

- Keyword search
ALLSTOCKER supports free keyword search. We selected Elasticsearch as the
search engine. Elasticsearch has elaborate search facilities, and they are
easily accessible via a REST API.
We have extended the existing Elasticsearch client for Pharo 5.

- Deployment
We are using AWS Elastic Load Balancer and running Nginx as a front-end web
server. Two back-end Pharo images are running, load-balanced with sticky
sessions.
It was somewhat difficult to find the appropriate number of simultaneous
database connections and Pharo processes. We feel ALLSTOCKER is pretty
stable for now, but we need to adjust more for expanding our

Best regards,
— [:masashi | ^umezawa]

Andres Valloud - NSA's August 2016 puzzle periodical problem

You can see the statement of an interesting problem I heard recently here.  Basically, it has two parts: a simpler stage, and a more complex setup.

The simpler stage is as follows.  Players A and B both take a card from a standard 52-card French deck and put it face up on their foreheads.  A can only see B's card, and vice versa.  Their task is to guess the color of their own card.  They cannot communicate with each other, and must write down their guesses at the same time.  If at least one of them guesses correctly, they both win.  Is there a strategy that always wins?

The more complex setup has four players, and now they must guess the card's suit.  If at least one of them guesses correctly, they all win.  Is there a strategy that always wins in this case?


If you want to have a go at the problem, stop here.

So you are still there?  Did you really make an honest attempt at the problem?


Ok, so you want to read on.  Fine :).

First, the simpler stage.  Clearly, treating both players as identical doesn't go anywhere.  But if one considers that A's identity is different from B's, one can also assume they behave differently in response to each other's card.  That is, the card they see is a selector for some behavior.  Moreover, both players may have different responses to the same messages.

This looks a lot like ECC with XOR or parity, and RAID disk arrays.  A and B could be stand-ins for error recovery mechanisms that try to reconstruct missing data.  If at least one guesses correctly, together they recover the unknown card.  This metaphor is not precise enough, but it suggests the following strategy.

For example, let's say A guesses A's card is always the same color as B's card (which A can see).  Now if B behaves the same that's not good enough --- so let's have B always guess a color different than that of A's card (which B can see).  Let's say color black is 0, and color red is 1.  Moreover, let CX stand for player X's card color.  With this convention, the approach boils down to:
  • A plays 0 xor CB
  • B plays CA xor 1
A quick check (by hand, the truth table has 4 entries) shows the players always win with this strategy.

If that's what's going on for two colors and two players, what could be a reasonable guess for 4 suits and 4 players?  Well, let the suits be represented by 0, 1, 2 and 3, and further let CX now stand for player X's card suit.  Then,
  • A plays 0 xor CB xor CC xor CD
  • B plays CA xor 1 xor CC xor CD
  • C plays CA xor CB xor 2 xor CD
  • D plays CA xor CB xor CC xor 3
And again, a quick check (with code, the truth table has 256 entries) shows the players always win with this strategy too.
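
For the curious, here is a quick verification sketch in Pharo (my own, not from the original post).  It enumerates all 256 suit assignments, encoding suits as 0-3, and checks that for at least one i the guess of player i (0-based), namely i xor (the xor of the three visible cards), is correct:

   | allWin |
   allWin := true.
   0 to: 3 do: [ :ca |
      0 to: 3 do: [ :cb |
         0 to: 3 do: [ :cc |
            0 to: 3 do: [ :cd |
               | cards someoneRight |
               cards := { ca. cb. cc. cd }.
               someoneRight := false.
               0 to: 3 do: [ :i |
                  | seen |
                  "xor of the three cards player i can see"
                  seen := 0.
                  0 to: 3 do: [ :j |
                     i = j ifFalse: [ seen := seen bitXor: (cards at: j + 1) ] ].
                  (i bitXor: seen) = (cards at: i + 1) ifTrue: [ someoneRight := true ] ].
               someoneRight ifFalse: [ allWin := false ] ] ] ] ].
   allWin  "true: at least one player guesses right in every one of the 256 cases"

The same loop with 0 to: 1 and two players checks the four-entry truth table of the simpler stage.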

October 14, 2016

Benoit St-Jean - Gnochon

Thanks to Twipply, ZirconiumX and JoshS (regulars of ##chessprogramming on IRC), I finally decided to go ahead with my chess engine named Gnochon!  At first, development will be slow as I am still working on Freewill and plan on finishing it before mid-November.

In case you asked, Gnochon is a French slang word in Quebec meaning someone *really* stupid!

[Image: gnochon (click to enlarge)]


Filed under: échecs, Chess, Pharo, Smalltalk, Squeak. Tagged: Chess, Freewill, Gnochon, Pharo, Smalltalk, Squeak

October 13, 2016

Pharo Weekly - New version of Iceberg

Hi, we are releasing a new version of Iceberg, with several new features and bug fixes. I wouldn't yet say that it is 100% production ready, but it is close, so I want to invite you to test it and provide feedback.
You can install it by doing:

Metacello new
  baseline: 'Iceberg';
  repository: 'github://npasserini/iceberg';
  load.
More installation instructions and documentation can be found at
Some of the new features in this version are:
– Allow to commit several packages together in the same commit.
– Show diffs for incoming and outgoing commits (i.e. before push/pull you can browse the difference between the remote and the local versions).
– New History view allows to see any commit in any branch and compare it to the current loaded version.
– Better support for interacting with code loaded outside Iceberg (SmalltalkHub, filetree, gitfiletree, etc.).
– From the diff view, revert changes or browse them (i.e. open a Nautilus on the changed class/method).
– Automatically update presentations on code / repository changes.
– Integration with Metacello, i.e. after installing Iceberg you do something like
Metacello new
  baseline: 'TaskIt';
  repository: 'github://sbragagnolo/taskit';
  load.
By default it will be loaded using Iceberg (there is a setting to avoid this if you prefer the traditional behavior).
– Improved handling of git errors.
– Improved performance for several operations.
– Improved documentation.
– … and several bug fixes and other minor improvements (please look at for more details).
Please do not hesitate to contact me if you have any doubts.

October 12, 2016

ESUG news - UK Smalltalk User Group Meeting - Monday, October 24th

The next meeting of the UK Smalltalk User Group Meeting will be on Monday, October 24th.

We'll meet at our usual venue, The Counting House, at 7pm.

If you'd like to join us, you can just show up at the pub. You can also sign up in advance on the meeting's Meetup page.

October 11, 2016

UK Smalltalk - UK Smalltalk User Group Meeting - Monday, October 24th

The next meeting of the UK Smalltalk User Group will be on Monday, October 24th.

We'll meet at our usual venue, the Counting House, from 7pm onwards.

If you'd like to join us, you can show up at the pub. You can also sign up in advance on the meeting's Meetup page.

October 10, 2016

Hernán Morales Durand - Territorial: A new package for Geographical Information Retrieval for Smalltalk

Territorial is a Smalltalk library for Geographical Information Retrieval (GIR) over geopolitical objects. It was originally designed for a phylogeographic information retrieval system based on BioSmalltalk. There will be no scripts in this post; everything is explained in the Territorial User Manual (PDF). The Territorial library has two locations: SmalltalkHub is where I will commit the latest

Pharo Weekly - Pillar Pharo Integration


Pillar now ships with a text editor that also features a syntax highlighter.
So, now, if you load the development version of Pillar:
Gofer new 
    smalltalkhubUser: 'Pier' project: 'Pillar';

You will have an extra presentation when inspecting a .pillar file: [Image: pillar-editor.png]
The new thing here is that the highlighter is based on the Pillar PetitParser, and it is extensible to highlight more parts if needed. The highlighting can also support actions. For example, the picture above shows the file on the right after clicking on the reference.
Please take a look and let me know what you think.

October 08, 2016

Pharo Weekly - Another week of enhancements

19107 Rule “Eleminate unnecessary nots” suggest to introduce some bugs

19182 LinkedList>>#select:thenDo: is slow

19181 TabManager should not stale focus from other widgets when tab content is ready

18901 Highlight message send selector on mouse over
19178 TabBar selectLastTab is not working

19180 Var lookup broken

19166 Why Object>>#-> doesnt use Association class>>#key:value:?

19157 The method Pragma>>#selector called from MetacelloToolBox>>#modifySymbolicVersionMethodFor:symbolicVersionSpecsDo: has been depr

19175 LongTestCase cleanup

19177 SharedQueue does not implement #atEnd and #contents
19173 *** Warning: Deprecation: The method GTSpotter>>#isEmpty called from GTSpotterTest>>#assertText:do: has been deprecated.

19170 The method Pragma>>#selector called from Parser class>>#primitivePragmaSelectors has been deprecated.

19149 spelling FFICalloutAPITests>>ffiTestContantFormat: format to:

19174 UI process should be restarted to support multi windows

19072 Integrate support for opening an extra Morphic world in an OSWindow



– case 19032

– case 19127

– fixed #name deprecation in Announcer>>#gtInspectorSubscriptionsIn:

– debuggers uses keymaps for specifying shortcuts instead of characters

– sync with the changes done in Pharo through slices
18929 FastTable has a little blank on the left

19161 Failing tests in Kernel due to FullBlockClosure

19131 Deprecation: The method Object>>name called from ChangeSet>>#changeRecorderFor: has been deprecated.

18979 Ignore some kernel and Network-UUID dependencies for the bootstrap

19165 Integrate Epicea 8.0.3
19162 fix a deprecated send in SettingNode

19130 Deprecation: The method Object>>name in Slot
19155 speedup instVarNamed:

19158 The method BitBlt class>>#current called from Form>>#rotateBy:magnify:smoothing: has been deprecated.
19070 Support for multiples Morphic Worlds
18032 mightBeASquare should be implemented in LargePositiveInteger

19154 Move to RPackage logic to retrieve actual class tags

19100 Senders of with Full closures

19145 FFI Callback return values for Enumerations should return their integer value

19144 (FloatTest >> #testBinaryLiteralString) method cannot be compiled in latest image

19132 simplify Deprecation class
19121 DNU in VersionBrowser when switching from “side by side” to “source”

19143 in Code Critique browser: DNU #setTextModelForTransformationRule:

18914 Improve MethodClassifier classification based on selector parts

19141 Optimize String>>#substrings:

19140 asClass should be packaged in ScriptingExtensions

19139 Method comment of yourself should be really improved

11657 remove category from SyntaxError, do not request it in SyntaxErrorDebugger from exception

Pharo Weekly - Tealight: Getting more out of Teapot


I wrote a small extension "Tealight" for the Teapot framework that makes it 
even easier to experiment with web based interfaces/web calls into Pharo 
running on the server side.

It additionally allows you to easily define and generate a simple or versioned web
interface for your own apps.

With this extension REST  annotated methods like

   greeting: aRequest
      <REST_API: 'GET' pattern: 'hello'>
      ^'HelloWorld from Pharo'

are transformed into dynamic Teapot routes and can be accessed easily via web.

You can use two pragmas:

  #REST_API:pattern:           for standard APIs
  #REST_API:versions:pattern:  for versioned APIs (see the sketch below)
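
As a purely hypothetical sketch of a versioned method (the selector comes from the list above, but the format of the versions: argument is my assumption; check the documentation linked below for the real signature):

   greetingV2: aRequest
      "Hypothetical: the #('v1' 'v2') versions argument format is assumed,
       not taken from the Tealight documentation."
      <REST_API: 'GET' versions: #('v1' 'v2') pattern: 'hello'>
      ^'HelloWorld from Pharo, versioned'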

Full docu explaining how to use it is added on

It also shows the new custom "Teaspoon" inspector extension tool implemented by
Attila Magyar, which is really cool for experimenting with and calling the web
methods without a web browser or Zinc scripts.

So far there is no config for Tealight in the catalog yet; I will add this soon.
So for the time being you need to load the latest version via

  Metacello new 
    repository: 'github://astares/Tealight/repository';
    baseline: 'Tealight';
    load.

to follow the docu description.

Have fun!


October 07, 2016

ESUG news - The Videos of ESUG 2016

The Videos of ESUG 2016 are now online!

Playlist Youtube Main Conference on Youtube

We added a more detailed list of all talks to the website with links to videos and slides:

Sadly, some videos were lost. A list of all missing videos can be found here.

October 06, 2016

Joachim Tuchel - Expressions you’d probably never type

… and yet they give the expected result. I just tried this one, assuming it’s probably worth a try, but it will most likely not work: String with: Character cr with: Character lf. Not that this is interesting or such. I was just surprised you can use with:with: to create Strings. Go back to work,… Continue reading Expressions you’d probably never type

Clément Béra - Copying and clean blocks

Hi folks,

A couple of weeks ago, the new closure implementation I co-designed and co-implemented with Eliot Miranda reached the Pharo 6 alpha image. We're now looking into failing tests so that the new closures can be used by default in Pharo 6. That should happen before the Pharo 6 freeze in mid-November.

The new closure design, called FullBlockClosure, allows among other things the implementation of the Copying and Clean block optimizations. The VM support is already there, and only in-image development is required (especially Opal compiler changes).

Performance problem

The main performance problem with blocks is the allocation of multiple objects. The creation of a block in Pharo requires the allocation of:

  • the FullBlockClosure object.
  • the outerContext object if it’s not already allocated (for example by another block creation in the same context).
  • a temp vector to represent efficiently some remote variables, if required by the semantic analysis performed at bytecode compilation time.

In total, a block creation requires between 1 and 3 object allocations, with on average 2 allocations.

The only allocation optimized to its maximum potential is the temp vector allocation: it's allocated only when needed and has no side effects if it's not present. However, the other two allocations (the outer context and the closure) are always performed even when they may not be required. The two optimizations I am going to describe avoid allocating these objects. In contrast to the temp vector, these optimizations may have a cost, as some debugging features may no longer be available.

Copying block

With FullBlockClosures, the closure's outerContext is needed *only* to perform non-local returns. Hence, if the bytecode compiler detects at compilation time that the closure does not have a non-local return, it can generate a different bytecode instruction that creates the closure without allocating the outer context.

It’s difficult to give precise estimate as it depends on the programmer’s style, but usually less than 5% of closures present in the code have a non local return. This optimization therefore allows most closures (> 95%) to be more efficient by allocating only the closure and if necessary the temp vector at closure creation time, without allocating the outer context.

Clean block

Some closures not only have no non-local returns, but in addition do not use any remote variables. In this case, the closure is in fact only a simple function. If the bytecode compiler detects such a closure, it can create the FullBlockClosure instance at compilation time, avoiding all allocations at runtime.

In practice, though it depends on the programmer's style and application, usually around 30% of closures neither access any remote temporary variable nor perform any non-local return. This optimization provides a huge speed-up for these closures.

This optimization can also be used to write code in places where object allocation is not allowed (for real-time libraries, etc.). A sketch of a clean-block candidate follows.
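
Again as an illustrative sketch rather than code from the post (names stands for an instance variable):

   "A clean-block candidate: no outer temps, no self, no non-local return,
    so the compiler could create this closure once at compilation time and
    reuse the same instance on every execution."
   sortedBySize
      ^ names sorted: [ :a :b | a size < b size ]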

Issue 1: debugging

Copying blocks and clean blocks do not have any reference to their outer context as the reference is not used for normal execution.

However, when the programmer edits code from a closure activation in the debugger, the debugger either shows "method not found on stack" (in the rare cases where the outer context is dead) or restarts the home context's code. For copying and clean blocks, the outer context reference is not present, hence the debugger would always display "method not found on stack".

The debugger can be improved to have some work-arounds, but it leads to non-obvious bugs while debugging.

So, the question is: do we want closure performance over this kind of debugging? Is this debugging feature deeply used or not?

Other Smalltalk runtimes have the optimization and it does not seem anybody is complaining.

Issue 2: IDE tools

In the case of clean blocks, a compiled code literal frame can now hold FullBlockClosure instances, which was not possible in the past. This leads to some complications: for example, if one wants to scan the bytecodes of both a method and its inner blocks, one needs to look for FullBlockClosure literals, reach the compiled block from there, etc.

Issue 3: Block identity

Another problem with clean blocks is identity. Normally a method answering a block answers a different instance at each execution, which is not the case for clean blocks. Consider a method compiled with the clean block optimization:

Example >> cleanBlock
   ^ [ ]

Example new cleanBlock == Example new cleanBlock

This DoIt answers true with the clean block optimization and false without it.


I will add these optimizations as an option (not activated by default), then I will evaluate the performance and reconsider.

In the context of my work on runtime optimization, copying blocks seem interesting, as they drastically reduce the amount of deoptimization metadata in multiple cases while not making anything more complex.

On the other hand, clean blocks have more drawbacks than copying blocks and make things more complex in the optimizer (like handling FullBlockClosure literal inlining).

Pharo Weekly - HTTP Tracing

HTTP tracing was introduced in Go, but it has been part of the standard Pharo image for a year or two (see the category Zinc-HTTP-Logging).

ZnLogEvents [ ]

Using ZnLogEvents and GT Tools to look at HTTP traffic behind Monticello [ ]

Pharo Seaside : Looking at HTTP Traffic [ ]

October 05, 2016

Torsten Bergmann - Pharo on VISSOFT 2016

The fourth IEEE Working Conference on Software Visualization (VISSOFT 2016). If you are on Facebook, check out the page to see how Pharo is used for visualizations.

October 04, 2016