Planet Smalltalk

August 17, 2017

ESUG news - UK Smalltalk User Group Meeting - Monday 21st August

The next meeting of the UK Smalltalk User Group will be on Monday, 21st August.

We'll meet at our new venue, the City Pride, at 7pm.

If you'd like to join us, you can just show up at the pub. You can also sign up in advance at the meeting's Meetup page:

Cincom Smalltalk - Smalltalks 2017 Registration and Call for Participation

FAST (Fundación Argentina de Smalltalk) invites you to attend and participate in the 11th Smalltalks 2017 Conference. This popular conference focuses on Smalltalk-based technologies, […]

The post Smalltalks 2017 Registration and Call for Participation appeared first on Cincom Smalltalk.

August 16, 2017

Cincom Smalltalk - Smalltalk Digest: August Edition

The August Edition of the Cincom Smalltalk Digest.

The post Smalltalk Digest: August Edition appeared first on Cincom Smalltalk.

Pharo Weekly - Pharo 70 is starting to roll for real :)

[Pharo 70] Build 40 – PR 133 FileReference-EnsureCreateFile

[Pharo 70] Build 39 – PR 210 20165/Support-segment-path-printing

[Pharo 70] Build 38 – PR 199 FileReference-size-needs-a-comment

[Pharo 70] Build 37 – PR 206 File names should not be canonicalised by default

[Pharo 70] Build 36 – PR 204 Improve-comment-of-SmallDictionary

[Pharo 70] Build 35 – PR 172 Implement-WeakIdentityValueDictionary

[Pharo 70] Build 32 – PR 73

[Pharo 70] Build 30 – PR 69

[Pharo 70] Build 29 – PR 191 20302 world menu help should have a comment

[Pharo 70] Build 28 – PR 194

[Pharo 70] Build 26 – PR 192

[Pharo 70] Build 27 – PR 103

Pharo Weekly - “codeAndMusic chooses Pharo” testimony

August 14, 2017

Pharo Weekly - Pharo: a soup of stones with great developers.

I have always loved the stone soup story. To me it captures a key aspect of the philosophy behind Pharo: sharing empowers us all. And it is working!



Pharo Weekly - Git: keeping your repo in sync

August 11, 2017

Pharo Weekly - Glance at the new Pharo70 git-based dev process

August 10, 2017

Pharo Weekly - How to contribute to Pharo 70

UK Smalltalk - UK Smalltalk User Group Meeting - Monday, 21st August

The next meeting of the UK Smalltalk User Group will be on Monday, August 21st.

We'll meet at our new venue, the City Pride, from 7pm onwards.

If you'd like to join us, you can show up at the pub. You can also sign up in advance on the meeting's Meetup page.

Pharo Weekly - [Pharo 70 alpha] Integration log restarts


We do not yet have an automated mail after each integration, so I will do it by hand. I'm going over the green issues that you can see here:

You can comment on them in

Here are the current integration items

[Pharo 70] Build 23 / PR 132

[Pharo 70] Build 22 / PR 169

[Pharo 70] Build 21 / PR 187

[Pharo 70] Build 19 / PR 168

[Pharo 70] Build 18 / PR 66

[Pharo 70] Build 17 / PR 185

August 08, 2017

Torsten Bergmann - Live Programming in Smalltalk development environments survey

There is a survey about how Smalltalk software developers use live programming features in practice. Nice!

Benoit St-Jean - Smalltalk en Afrique

There is now a Google discussion group for African Smalltalk developers, the African Smalltalk User Group.

Welcome to all our new colleagues! Looking forward to chatting with you!

Filed under: Smalltalk Tagged: African Smalltalk User Group, Africa, ASUG, discussion, forum, Google group, Smalltalk

August 06, 2017

Pierce Ng - DataFrame and Simple Linear Regression

In the previous post I mentioned using Iceberg successfully. The code I was pushing is SLRCalculator, a simple linear regression calculator, written to take Oleksandr Zaytsev's DataFrame library for a spin, by way of porting Jason Brownlee's excellent simple linear regression in Python to Pharo.

Firstly, install DataFrame. This also pulls in Roassal.

Metacello new
  baseline: 'DataFrame';
  repository: 'github://PolyMathOrg/DataFrame';
  load.

SLRCalculator implements mean, variance, covariance, coefficients etc, and also incorporates the Swedish automobile insurance dataset used by Jason in his Python example.

SLRCalculator class>>loadSwedishAutoInsuranceData
  | df |

  df := DataFrame fromRows: #(
            ( 108 392.5 )
            ( 19 46.2 )
            ( 13 15.7 )
            "more lines" ).
  df columnNames: #(X Y).
  ^ df

The computation for covariance also uses DataFrame.

covariance: dataFrame
  | xvalues yvalues xmean ymean covar |

  xvalues := dataFrame columnAt: 1.
  yvalues := dataFrame columnAt: 2.	
  xmean := self mean: xvalues.
  ymean := self mean: yvalues.	
  covar := 0.
  1 to: xvalues size do: [ :idx |
    covar := covar + (((xvalues at: idx) - xmean) * ((yvalues at: idx) - ymean)) ].
  ^ covar 
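Given mean, variance and covariance, the regression coefficients follow directly. Here is a hypothetical sketch of how a coefficients: method could combine them; the variance: selector and this exact implementation are assumptions, not SLRCalculator's actual source:

```smalltalk
"Hypothetical sketch, not SLRCalculator's actual code: combine the
 statistics above into the two regression coefficients. Assumes a
 variance: method analogous to mean: and covariance:."
coefficients: dataFrame
  | xvalues yvalues b1 b0 |
  xvalues := dataFrame columnAt: 1.
  yvalues := dataFrame columnAt: 2.
  "b1 = covariance(x, y) / variance(x)"
  b1 := (self covariance: dataFrame) / (self variance: xvalues).
  "b0 = mean(y) - (b1 * mean(x))"
  b0 := (self mean: yvalues) - (b1 * (self mean: xvalues)).
  ^ Array with: b0 with: b1
```

This matches how coeff is used later on: (coeff at: 1) is the intercept b0 and (coeff at: 2) the slope b1.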

Let's see how to use SLRCalculator to perform linear regression, with graphing using Roassal. First declare the variables and instantiate some objects:

| allData splitArray trainingData testData s coeff g dsa dlr legend |

s := SLRCalculator new.

allData := SLRCalculator loadSwedishAutoInsuranceData.

Next, split the data set into training and test subsets. Splitting without shuffling means always taking the first 60% of the data for training.

splitArray := s extractForTesting: allData by: 60 percent shuffled: false.  
trainingData := splitArray at: 1.
testData := splitArray at: 2.
coeff := s coefficients: trainingData.

Set up for graphing. Load `allData' as points.

g := RTGrapher new.

allData do: [ :row |
  dsa := RTData new.
  dsa dotShape color: Color blue.
  dsa points: { (row at: 1) @ (row at: 2) }.
  dsa x: #x.
  dsa y: #y.
  g add: dsa ].

Create the points to plot the linear regression of the full data set, using the coefficients computed from the training subset.

dlr := RTData new.
dlr noDot.
dlr connectColor: Color red.
dlr points: (allData column: #X).
" y = b0 + (b1 * x) "
dlr x: #yourself.
dlr y: [ :v | (coeff at: 1) + (v * (coeff at: 2)) ].
g add: dlr.

Make the plot look nice.

g axisX noDecimal; title: 'Claims'.
g axisY title: 'Total payment in SEK'.
g shouldUseNiceLabels: true.
g build.

legend := RTLegendBuilder new.
legend view: g view.
legend addText: 'Swedish Automobile Insurance Data Linear Regression'.
legend build.

g view

Putting the code all together:

| allData splitArray trainingData testData s coeff g dsa dlr legend |

s := SLRCalculator new.

allData := SLRCalculator loadSwedishAutoInsuranceData.
splitArray := s extractForTesting: allData by: 60 percent shuffled: false.  
trainingData := splitArray at: 1.
testData := splitArray at: 2.
coeff := s coefficients: trainingData.

g := RTGrapher new.

allData do: [ :row |
  dsa := RTData new.
  dsa dotShape color: Color blue.
  dsa points: { (row at: 1) @ (row at: 2) }.
  dsa x: #x.
  dsa y: #y.
  g add: dsa ].

dlr := RTData new.
dlr noDot.
dlr connectColor: Color red.
dlr points: (allData column: #X).
" y = b0 + (b1 * x) "
dlr x: #yourself.
dlr y: [ :v | (coeff at: 1) + (v * (coeff at: 2)) ].
g add: dlr.

g axisX noDecimal; title: 'Claims'.
g axisY title: 'Total payment in SEK'.
g shouldUseNiceLabels: true.
g build.

legend := RTLegendBuilder new.
legend view: g view.
legend addText: 'Swedish Automobile Insurance Data Linear Regression'.
legend build.

g view

Copy/paste the code into a playground, press shift-ctrl-g...

August 05, 2017

Pharo Weekly - Moldable brick editor – alpha


We are very happy to announce the alpha version of a moldable editor built in Brick, which is based on Bloc. This is primarily the work of Alex Syrel. The project was initially financially sponsored by ESUG and it is currently supported by feenk. And of course, the project is based on the tremendous work that went into Bloc and Brick by all contributors.

Take a look at this 2 min video:

The basic editor works and it is both flexible and scalable. For example, the last example shown in the video is an editor opened on 1M characters, which is reasonably large, and as can be seen, one can interact with it as smoothly as with a single screen of text. It actually works just as well with 100M characters.

The functionality of the editor includes: rendering, line wrapping, keypress and shortcut handling, navigation, selection and text styling. Currently, the editor is 1260 lines of code including method and class comments. This is not large for a text editor, and it is possible because most of the work is done by generic concepts that already exist in Bloc, such as layouts and text measurements. Besides the small maintenance cost, the benefit is that we have the option to build all sorts of variations with little effort. That is why we call this a moldable text editor.

Another benefit of using elements and layouts is that we can also embed other kinds of non-text elements with little effort (such as pictures), and obtain a rich and live text editor. We already have basic examples for this behavior, and we will focus more in the next period on this area.

The next immediate step is to add syntax highlighting. Besides the text attributes problem, this issue will also exercise how thread-safe the implementation is. The underlying structure is theoretically thread-safe, but that still needs to be proven in practice.

We think this is a significant step because the editor was the main piece missing in Brick. It will finally allow us to build value on top of Brick that can be directly perceived by regular users, and this, in turn, will generate more traction. Please also note that because Bloc is now directly embeddable in Morphic, we can actually start using it right away. For example, the picture below shows the text element being shown through a live preview in the GTInspector.


This is another puzzle piece towards the final goal of engineering the future of the Pharo user interface. There is still a long way to go to reach that goal, but considering the work that is behind us, the goal that looked so elusive when Alain and Stef initiated the Bloc project is now palpable.

We will continue the work on this over the next period and we expect to announce new developments soon.

If you want to play with it, you can load the code like this (works in both Pharo 6 and 7):

Iceberg enableMetacelloIntegration: true.
Metacello new
  baseline: 'Brick';
  repository: 'github://pharo-graphics/Brick/src';
  load: #development

Please let us know what you think.

Alex and Doru

Pierce Ng - Iceberg and SSH keys

On a laptop that I recently rebuilt, I created and have been using an ED25519 SSH key pair, including with GitHub. Iceberg doesn't work with it though, throwing the error "LGit_GIT_ERROR: No ssh-agent suitable credentials found". This is because Iceberg uses libgit2, which uses libssh2, which apparently doesn't support ED25519 keys. I created a new RSA key pair, registered it with GitHub, and Iceberg works.
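For reference, generating a libssh2-friendly RSA key pair looks like this; the file path and comment below are placeholders, adjust them to taste:

```shell
# libssh2 (used by Iceberg via libgit2) doesn't support ED25519 keys,
# so generate an RSA key pair instead. Path and comment are examples.
keyfile="${KEYFILE:-$HOME/.ssh/id_rsa_github}"
mkdir -p "$(dirname "$keyfile")"
ssh-keygen -t rsa -b 4096 -N '' -C "you@example.com" -f "$keyfile" -q
# Register "$keyfile.pub" with GitHub, then load the private key into
# ssh-agent so libssh2 can find it:  ssh-add "$keyfile"
cat "$keyfile.pub"
```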

  • OS = Linux Mint 18.1
  • Pharo image = 60510-64
  • Pharo VM = pulled from GH opensmalltalk-vm today and built on said laptop

August 04, 2017

Smalltalk Jobs - Smalltalk Jobs – 8/4/17

3 jobs in Germany:

  • In North Rhine-Westphalia there is a 6 month contract with possibility of extension.
  • In Cologne they are looking for what looks like a permanent in-house VisualWorks engineer.
  • In Frankfurt there is another 6 month contract that wants both English and German.

Good luck with your job search!

Filed under: Employment

Pharo Weekly - Live coding Survey

Dear Pharoers,

Live programming frees developers from the “edit-compile-run” loop and allows people to interact with running programs very easily. Live programming is getting popular, but many of its features have been present in Smalltalk for a very long time.

We want to understand how Smalltalk software developers use live programming features in practice. We would be grateful if you could participate in our 10-minute survey on this subject:

As a thank you for your participation, you will be able to participate in a raffle to win a Smalltalk book of your choice. If you wish to participate, you will need to share your email with us, so that we can contact you.

We would appreciate it if you share the survey.
By participating in the survey you will:
– help me to successfully finish my PhD,
– push Smalltalk awareness in Live Programming research community,
– bring new integrated electronic communication ideas, and
– bring new ideas to improve our Smalltalk Live Programming experience 🙂
We will close the survey on Wednesday, August 9, 2017 AoE.

Thank you and we really hope you enjoy participating in our survey!
Juraj Kubelka, PhD Student at the University of Chile
Romain Robbes, Professor at Free University of Bozen-Bolzano
Alexandre Bergel, Professor at the University of Chile

August 02, 2017

Stefan Marr - #bikexit17, Hello England, Goodbye EU?

Academics are infamous for their project names and abbreviations, so, let’s call this #bikexit17…

As some people already know, I am starting a new position in October. I will be a lecturer at the University of Kent as part of the Programming Languages and Systems group. While I am pretty excited about the opportunity to work with a very interesting group, it reminded me of all the little annoying things I went through when moving from Lille to Linz. One of them was shipping my bicycle. Last time, I was already thinking, well, why not ride it instead of shipping it?

And that’s where we are this time around. I got a few holidays I have to take anyway. And, I could use a real offline holiday for a change. Especially, since I suspect the start in Canterbury to be a bit stressful.

It should be possible to do the whole trip with about 11 days of cycling. Currently, my plan is as follows:

  • Day 1: Linz -> Passau, a brief 90km to warm up (Sept. 9)
  • Day 2: Passau -> Regensburg, ca. 140km (Sept. 10)
  • Day 3: Regensburg -> Ansbach, ca. 140km (Sept. 11)
  • Day 4: Ansbach -> Mosbach, ca. 130km (Sept. 12)
  • Day 5: Mosbach -> Heidelberg -> Mannheim -> Worms, some sightseeing and ca. 90km (Sept. 13)
  • probably a day of rest

  • Day 6: Worms -> Koblenz, ca. 150km (Sept. 15)
  • Day 7: Koblenz -> Bonn or Cologne, ca. 70-100km (Sept. 16)
  • Day 8: Bonn/Cologne -> Maastricht, ca. 140km (Sept. 18)
  • Day 9: Maastricht -> Brussels, ca. 100km (Sept. 19)
  • spending a couple of days in Brussels and around

  • Day 10: Brussels -> Lille, ca. 130km (Sept. 21)
  • perhaps another day in Lille
  • Day 11: Lille -> Canterbury, ca. 130km (Sept. 23)

If I am passing by your region, I’d be happy to have some company along the way. Will probably start out with a friend to go to Passau, and might have company around Bonn. Flanders should also be a very nice area for cycling 😉

Since this is my first trip of this magnitude, I am also happy for any tips, recommendations, and dos/don'ts you might be willing to share. I've already got some, but wouldn't mind more.

And, just to be clear: such adventures are drastically simplified by being able to travel without a passport or roaming charges. So, thank you EU. Thank you very much!

August 01, 2017

Pharo Weekly - Another supercool enhancement in bootstrap

We are checking a huge pull request, #177, that will change some basics of the bootstrap process:
Now we will bootstrap a smaller image that does not include the compiler/parser. The compiler and related packages are exported and loaded using a binary exporter named Hermes.
The compiler is then used to load FileSystem and Monticello. The rest of the bootstrap process will be the same as before.
As a result we will have a faster bootstrap, better system modularization, and more possibilities.
It required some modularization efforts:
– simplified initialization scripts
– Use Zinc converters and encoders instead of FilePathEncoder and old TextConverter
– Use Stdio instead of FileStream
– Using File instead of FileSystem
– Deprecated FileStream & its subclasses (moved to Deprecated70)
– Extracted Path classes to their own package: FileSystem-Path
– Moved OpalEncoders to their own package. They are required by the runtime (not only for compilation)
– Introduced AsciiCharset in the kernel to answer #isLetter, #isUppercase and so on without requiring full Unicode tables from the beginning
– Cleaning up a bit the full exception logging infrastructure (streams, transcript, files, stack printing…)
– Split Ring methods required for system navigation to the Ring-Navigation package
– Remove usages of #translated in the kernel
– Refactored the bootstrapping classes to remove duplications
– Cleaning up dependencies in CompiledMethod>>printOn:
– fix path printing
We need to merge these changes at once, and of course this can cause some conflicts with existing pull requests or external code. Anyway, we need to merge it as soon as possible.
So please try to look at the PR and test the resultant image [1] to avoid any major problems.

July 30, 2017

Pharo Weekly - How to publish github managed project to Pharo Catalog

Hi Norbert,

I manage some of my projects already in GitHub. For example Tealight which is also in catalog.

All you have to do is:
1) create a "tag" in git; I name the tags with the corresponding version number, like 0.0.2, following semantic versioning
2) provide a ConfigurationOf (as you had in the past) where the "version" references the "git tag" with the same name
a) also #stable has to point to the "version" as it was in the past
optional: b) the #development should point to the Baseline in the git branch that you typically use for development (this allows loading of the bleeding edge as before)
3) upload the Configuration to a MetaRepo as before to appear in the catalog

1) I have two tagged versions for Tealight on Git (0.0.1 and 0.0.2)

2) In my ConfigurationOfTealight (which I also manage in git) I reference the tag; for example, in version 0.0.2 I reference the tag with the same name: "github://astares/Tealight:0.0.2/repository"

Side note: a) now you can use your #stable definition in the ConfigurationOf as before
b) Your #development definition should point to the master branch (or whatever the development branch is)
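As a sketch, a version method in the ConfigurationOf that pins the git tag might look like this. The method body follows the common Metacello pattern for referencing a BaselineOf in a tagged git repository; it is an assumption for illustration, not Tealight's actual configuration code:

```smalltalk
"Hypothetical sketch of step 2: a ConfigurationOf version method whose
 repository URL references the git tag of the same name (0.0.2)."
ConfigurationOfTealight >> version002: spec
  <version: '0.0.2'>
  spec for: #common do: [
    spec
      baseline: 'Tealight' with: [
        spec repository: 'github://astares/Tealight:0.0.2/repository' ];
      import: 'Tealight' ]
```

With this in place, #stable can simply be pointed at '0.0.2' as before.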


Iceberg is managed in a similar way (but is now included in the image and the catalog part is only for compatibility).

Hope this helps.


Smalltalk Jobs - Smalltalk Jobs – 7/29/17

  • Anywhere (remote): Senior ControlWorks Software Engineer through Lam Research
    • Required Skills:
      • 7+ years of experience in Software development using ControlWorks (Rudolph) Framework for Semiconductor Equipment
      • Expert knowledge in ControlWorks for one of the following fields: Process Modules, Transfer Chambers or Wafer Scheduling
      • Excellent knowledge of Smalltalk in combination with the ControlWorks framework (preferably VisualWorks 7.4)
      • Excellent knowledge of Object Oriented Software implementation and design
      • Design Patterns and Finite State Machines
      • Knowledge of multithreading
      • Strong software engineering skills: modular design, data structures and algorithms
      • Experience with Source Control and Development life cycles
      • Analytical approach to root cause analysis
      • B.S or Masters in Computer Science, Computer Engineering, Electronics or related field
      • Ability to travel within USA, Europe, and Asia (total is less than 10%)
      • Telecommuting would be an option for this position
    • Wanted Skills:
      • Knowledge of preemptive thread scheduling of VisualWorks Smalltalk
      • CVS
      • STORE
  • Chennai, India: Lead Software Development Engineer through The Glove
    • Required Skills:
      • 5 to 11 years of experience with Smalltalk
      • Smalltalk VSE 3.2
      • Cincom Smalltalk VisualWorks 7.9
      • Ability to design, develop, integrate and deploy in Smalltalk technology (minor / major enhancements)
Good luck with your job hunting,
James T. Savidge


Filed under: Employment Tagged: jobs, Smalltalk, Smalltalk jobs

July 28, 2017

BioSmalltalk - BioSmalltalk workflow for ancestral haplotype analysis published in Animals


Recently we published an article in the journal Animals analyzing the Bovine Lymphocyte Antigen (BoLA) region of Brangus cattle. We used LAMP-LD (Local Ancestry Inference in Admixed populations), a window-based algorithm combined with a hierarchical Hidden Markov Model that represents haplotypes in a population and allows estimating enrichment from two parental populations. The ancestral haplotype is the one which appeared first in the parental populations. As we do not know which one it is, we assume it is the most abundant one across both of them, considering its association with the mixing ratio.

In the script below, we analyze MHC regions (chromosome 23 in cattle) and generate a histogram plot, annotating the ranges of these regions. The script was adapted for general usage, and each step is commented.


To replicate the workflow in your bioinformatics experiments, you will need the reference alleles of your SNPs. Here, we extracted them from the Affymetrix TXT report exported from the Axiom Analysis Suite software. The PLINK wrapper is used to filter families of interest and by genotyping error rate. Our cattle families are labeled 'AA' for Angus, 'BRANG' for Brangus and 'BRAH' for Brahman. If you do not have family information in your PED file, you will have to add it manually or with UNIX command-line utilities (cut, paste, etc.). We also calculated the effective population size for each family (commonly referred to as "Ne" in the literature) and used it as a parameter to ShapeIt.

Plotting was performed using the Roassal visualization engine; I should thank Alexandre Bergel for his tireless help with its impressive API.

| shapeItSamplesDir baseDir reportFile alleleAFile plinkFile |

" Base directory for all input and output files "
baseDir := '.'.
" Axiom Analysis Suite TXT Report "
reportFile := baseDir , 'AAS_report.txt'.
alleleAFile := baseDir , 'allele_A.txt'.
plinkFile := baseDir , 'Bos1_REF-FINAL'.

" Extract Allele_A column from Affymetrix report file to use it as reference alleles "
BioAffyTXTFormatter new
  inputFile: reportFile;
  outputFilename: alleleAFile;
  execute.

" Build the PLINK BED file with reference alleles "
" The SNPs_BoLA_Bos1-annotdb32.txt is just a text file used to extract the SNPs of interest in the MHC region from the Microarray output "
BioPLINK2Wrapper new
  bfile: plinkFile;
  out: baseDir , 'Bos1_REF-FINAL_REFA';
  referenceAlleles: alleleAFile;
  extract: baseDir , 'SNPs_BoLA_Bos1-annotdb32.txt';
  execute.

" Apply families of interest filter "
BioPLINK2Wrapper new
  bfile: baseDir , 'Bos1_REF-FINAL_REFA';
  out: baseDir , 'Bos1_REF-FINAL-AA-BRAH-BRANG_r1';
  keepFams: #('AA' 'BRANG' 'BRAH');
  execute.

" Apply basic genotyping error rate filter "
BioPLINK2Wrapper new
  bfile: baseDir , 'Bos1_REF-FINAL-AA-BRAH-BRANG_r1';
  out: baseDir , 'Bos1_REF-FINAL-AA-BRAH-BRANG_Atsm_GINO-chr23_MHC';
  geno: 0.05;
  execute.

" Regenerate BED for duplicated BRAH lines "
" Split data set into 3 separate BED files by family "
#('AA' 'BRANG' 'BRAH') do: [ :famName |
  BioPLINK2Wrapper new
    file: baseDir , 'Bos1_REF-FINAL-' , famName , '_Atsm_GINO-chr23_MHC';
    out: baseDir , 'Bos1_REF-FINAL-' , famName , '_Atsm_GINO-chr23_MHC';
    keepFam: famName;
    execute ].

" Run ShapeIt with Ne parameters "
{ 'AA' -> 100. 'BRANG' -> 106. 'BRAH' -> 159 } do: [ :assoc |
  BioShapeIt2WrapperR644 new
    inputBinarized: baseDir , 'Bos1_REF-FINAL-' , assoc key , '_r1';
    outputMax: baseDir , 'Bos1_REF-FINAL-AA_r1';
    effectiveSize: assoc value;
    execute ].

" Generate .GENO file for LAMP-LD "
BioLAMPLDGenotypeFormatter new
  bimFilePath: baseDir , 'Bos1_REF-FINAL-BRANG_Atsm_GINO-chr23_MHC.bim';
  pedFilePath: baseDir , 'Bos1_REF-FINAL-BRANG_Atsm_GINO-chr23_MHC.ped';
  outputFilename: baseDir , 'Bos1_REF-FINAL-BRANG_Atsm_GINO-chr23_MHC.geno';
  execute.

" Extract positions to build .POS file for LAMP-LD "
(baseDir , '') asFileReference
cut: #(4)
to: (baseDir , 'Bos1_REF-FINAL-Atsm_GINO-chr23_MHC.pos') asFileReference writeStream.

" Transpose HAPS files "
" ... to be included "

" Run LAMP-LD "
shapeItSamplesDir := 'ShapeItFiles' asFileReference.
BioLAMPLDWrapper new
  windowSize: 100;
  nrFounders: 2;
  positionsFile: (shapeItSamplesDir / 'Bos1_REF-FINAL-Atsm_GINO-chr23_MHC.pos') fullName;
  addPopFile: (shapeItSamplesDir / 'Bos1_REF-FINAL-AA_Atsm_GINO-chr23_MHC-ShapeIt2-Ne100.haps.ref') fullName atOrder: 1;
  addPopFile: (shapeItSamplesDir / 'Bos1_REF-FINAL-BRAH_Atsm_GINO-chr23_MHC-ShapeIt2-Ne150.haps.ref') fullName atOrder: 2;
  genosFile: (shapeItSamplesDir / 'Bos1_REF-FINAL-BRANG_Atsm_GINO-chr23_MHC.geno') fullName;
  outputFile: baseDir , 'lamp-ld_example1.out';
  execute.

" Convert positions to mbases "
BioLAMPLDWrapper new
positionsFile: baseDir , 'Bos1_REF-FINAL-Atsm_GINO-chr23_MHC.pos';

BioLAMPLD2WayVisualizer new
  lineWidth: 2;
  posFile: baseDir , 'Bos1_REF-FINAL-Atsm_GINO-chr23_MHC_1000.pos';
  population1Name: 'Angus' color: Color red;
  population2Name: 'Brahman' color: Color blue;
  readExpanded: baseDir , 'postlampld_ws-100.txt' title: 'Angus versus Brahman';
  addGenomicRangeBelowXFrom: 27862 to: 27989 label: 'C I';
  addGenomicRangeBelowXFrom: 25350 to: 25592 label: 'C IIa';
  addGenomicRangeBelowXFrom: 7011 to: 7534 label: 'C IIb';
  addGenomicRangeBelowXFrom: 26973 to: 27576 label: 'C III';

The resulting image can be viewed below:

Depending on your version of ShapeIt, if you have too few samples for a population, ShapeIt v2 won't run, failing with the following error message:

src/modes/phaser/phaser_algorithm.cpp:147: void phaser::phaseSegment(int): Assertion `conditional_index[segment].size() > 5' failed.

Of course you should get more samples, but to continue testing the workflow you could just duplicate lines and change the IDs of the duplicated samples. This is how to do it:

" First, split the data set into 3 separate PED files by family, but recode the file as textual format (PED) "
#('AA' 'BRANG' 'BRAH') do: [ :famName |
  BioPLINK2Wrapper new
    bfile: baseDir , 'Bos1_REF-FINAL-AA-BRAH-BRANG_Atsm_GINO-chr23_MHC';
    out: baseDir , 'Bos1_REF-FINAL-' , famName , '_Atsm_GINO-chr23_MHC';
    keepFam: famName;
    execute ].

" Create a new temporary file "
(baseDir , 'Bos1_REF-FINAL-BRAH_Atsm_GINO-chr23_MHC.ped') asFileReference
copyTo: (baseDir , 'Bos1_REF-FINAL-BRAH_Atsm_GINO-chr23_MHC.nwped') asFileReference.

" Duplicate BRAH lines with different IDs "
(FileStream fileNamed: baseDir , 'Bos1_REF-FINAL-BRAH_Atsm_GINO-chr23_MHC.nwped')
  ifNotNil: [ :stream |
    stream setToEnd.
    (baseDir , 'Bos1_REF-FINAL-BRAH_Atsm_GINO-chr23_MHC.ped') asFileReference readStream contents linesDo: [ :line |
      | tkl newId |
      tkl := line findTokens: Character tab.
      newId := 'A' , tkl second.
      tkl at: 2 put: newId.
      stream
        nextPutAll: (tkl joinUsing: Character tab);
        nextPutTerminator ] ].

" Rename again to the original file "
(baseDir , 'Bos1_REF-FINAL-BRAH_Atsm_GINO-chr23_MHC.nwped') asFileReference renameTo: baseDir , 'Bos1_REF-FINAL-BRAH_Atsm_GINO-chr23_MHC.ped'.

July 24, 2017

Pharo Weekly - Pharo 6.1 (summer) released!

We are releasing Pharo 6.1.
Usually, between major versions we just apply bugfixes, changing the build number and not announcing new versions, but this time is different since the fixes applied required a new VM.
The principal reason for the new version is to update Iceberg support, bringing it to macOS 64bits version.
So, now Pharo 6.1 comes with Iceberg 0.5.5, which includes:
– running on macOS 64bits
– adds cherry pick
– adds major improvements on performance for big repositories
– adds pull request review plugin
– repositories browser: group branches by remote
– adds Bitbucket and GitLab to the recognised providers in the Metacello integration
– uses libgit v0.25.1 as backend
– several bugfixes
Other important change:
– the Linux VM now uses the threaded heartbeat by default.
We are still missing 64-bit Windows (sorry for that), but we are getting there. I hope to have it running right after ESUG.
To download version 6.1, you can go to the download page, or use zeroconf:

Craig Latta - Livecoding other tabs with the Chrome Remote Debugging Protocol

Chrome Debugging Protocol

We’ve seen how to use Caffeine to livecode the webpage in which we’re running. With its support for the Chrome Remote Debugging Protocol (CRDP), we can also use it to livecode every other page loaded in the web browser.

Some Help From the Inside

To make this work, we need to coordinate with the Chrome runtime engine. For CRDP, there are two ways of doing this. One is to communicate using a WebSocket connection; I wrote about this last year. This is useful when the CRDP client and target pages are running in two different web browsers (possibly on two different machines), but with the downside of starting the target web browser in a special way (so that it starts a conventional webserver).

The other way, possible when both the CRDP client and target pages are in the same web browser, is to use a Chrome extension. The extension can communicate with the client page over an internal port object, created by the chrome.runtime API, and expose the CRDP APIs. The web browser need not be started in a special way, it just needs to have the extension installed. I’ve published a Caffeine Helper extension, available on the Chrome Web Store. Once installed, the extension coordinates communication between Caffeine and the CRDP.

Attaching to a Tab

In Caffeine, we create a connection to the extension by creating an instance of CaffeineExtension:

CaffeineExtension new inspect

As far as Chrome is concerned, Caffeine is now a debugger, just like the built-in DevTools. (In fact, the DevTools do what they do by using the very same CRDP APIs; they’re just another JavaScript application, like Caffeine is.) Let’s open a webpage in another tab, for us to manipulate. The Google homepage makes for a familiar example. We can attach to it, from the inspector we just opened, by evaluating:

self attachToTabWithTitle: 'Google'

Changing Feelings

Now let’s change something on the page. We’ll change the text of the “I’m Feeling Lucky” button. We can get a reference to it with:

tabs onlyOne find: 'Feeling'

When we attached to the tab, the tabs instance variable of our CaffeineExtension object got an instance of ChromeTab added to it. ChromeTabs provide a unified message interface to all the CRDP APIs, also known as domains. The DOM domain has a search function, which we can use to find the “I’m Feeling Lucky” button. The CaffeineExtension>>find: method which uses that function answers a collection of search results objects. Each search result object is a proxy for a JavaScript DOM object in the Google page, an instance of the ChromeRemoteObject class.

In the picture above, you can see an inspector on a ChromeRemoteObject corresponding to the “I’m Feeling Lucky” button, an HTMLInputElement DOM object. Like the JSObjectProxies we use to communicate with JavaScript objects in our own page, ChromeRemoteObjects support normal Smalltalk messaging, making the JavaScript DOM objects in our attached page seem like local Smalltalk objects. We only need to know which messages to send. In this case, we send the messages of HTMLInputElement.

As with the JavaScript objects of our own page, instead of having to look up external documentation for messages, we can use subclasses of JSObject to document them. In this case, we can use an instance of the JSObject subclass HTMLInputElement. Its proxy instance variable will be our ChromeRemoteObject instead of a JSObjectProxy.

For the first message to our remote HTMLInputElement, we’ll change the button label text, by changing the element’s value property:

self at: #value put: 'I''m Feeling Happy'

The Potential for Dynamic Web Development

The change we made happens immediately, just as if we had done it from the Chrome DevTools console. We’re taking advantage of JavaScript’s inherent livecoding nature, from an environment which can be much more comfortable and powerful than DevTools. The form of web applications need not be static files, although that’s a convenient intermediate form for webservers to deliver. With generalized messaging connectivity to the DOM of every page in a web browser, and with other web browsers, we have a far more powerful editing medium. Web applications are dynamic media when people are using them, and they can be that way when we develop them, too.

What shall we do next?


July 23, 2017

Pharo Weekly - System Monitoring Images & Nagios

I just made it Nagios compatible. I developed it because I'm using Munin [1]. You can look at this [2] blog post to see how to do it. If you have questions, just ask.

July 21, 2017

Pharo Weekly - API

I made an API tool here:!/~pdebruic/SegmentIO

and they take the logging events and can send them to any of these 200+ tools:

Last time I checked it was working, but it's been a while. Unless they've changed things dramatically, it should work.


July 20, 2017

Torsten Bergmann - Sista Open Alpha

The Cog VM already made a huge difference in performance for the OpenSmalltalk VM shared by Squeak, Pharo, Cuis and Newspeak. But now Sista, the optimizing JIT, is in open alpha, and it looks set to increase performance even more. Read here.

UK Smalltalk - UK Smalltalk User Group Meeting - Monday, 31st July

The next meeting of the UK Smalltalk User Group will be on Monday, July 31st.

We'll meet at a new venue, the City Pride, from 7pm onwards.

If you'd like to join us, you can show up at the pub. You can also sign up in advance on the meeting's Meetup page.