Planet Smalltalk

July 20, 2017

Torsten Bergmann - Sista Open Alpha

The Cog VM already made a huge difference in performance for the OpenSmalltalk VM shared by Squeak, Pharo, Cuis and Newspeak. Now Sista, the optimizing JIT, is entering open alpha, and it looks set to increase performance even more. Read here.

UK Smalltalk - UK Smalltalk User Group Meeting - Monday, 31st July

The next meeting of the UK Smalltalk User Group will be on Monday, July 31st.

We'll meet at a new venue, the City Pride, from 7pm onwards.

If you'd like to join us, you can show up at the pub. You can also sign up in advance on the meeting's Meetup page.

July 19, 2017

Cincom Smalltalk - Smalltalk Digest: July Edition

The July Edition of the Cincom Smalltalk Digest.

The post Smalltalk Digest: July Edition appeared first on Cincom Smalltalk.

Cincom Smalltalk - Other Interesting #ESUG17 Talks

Title: Seaside-Based Custom ERP System When: Tuesday, September 5, 11:30 a.m. – 12:00 p.m. Name: Bob Nemec Type: Talk Abstract: In this session, Bob Nemec will discuss TRAX, which is […]

The post Other Interesting #ESUG17 Talks appeared first on Cincom Smalltalk.

Cincom Smalltalk - ESUG17: Security Enhancements in Cincom® VisualWorks® 8.3

Title: Security Enhancements in Cincom® VisualWorks® 8.3 When: Wednesday, September 6, 2:00 p.m. – 2:30 p.m. Name: Jerry Kott Type: Talk Abstract: The upcoming release, Cincom VisualWorks 8.3, includes several […]

The post ESUG17: Security Enhancements in Cincom® VisualWorks® 8.3 appeared first on Cincom Smalltalk.

Cincom Smalltalk - ESUG17: HTTP/2 in the Cincom Smalltalk™ SiouX Server

Title:  HTTP/2 in the Cincom Smalltalk™ SiouX Server When: Wednesday, September 6, 11:30 a.m. – 12:00 p.m. Name: Jerry Kott Type: Talk Abstract: In this presentation, Jerry Kott, Senior Software […]

The post ESUG17: HTTP/2 in the Cincom Smalltalk™ SiouX Server appeared first on Cincom Smalltalk.

Cincom Smalltalk - ESUG17: AppeX and JavaScript Support Enhancements in Cincom Smalltalk™

Title: AppeX and JavaScript Support Enhancements in Cincom Smalltalk™ When: Tuesday, September 5, 2:30 p.m. – 3:00 p.m. Name: Vladimir Degen Type: Talk Abstract: In this presentation, Vladimir Degen, Senior […]

The post ESUG17: AppeX and JavaScript Support Enhancements in Cincom Smalltalk™ appeared first on Cincom Smalltalk.

Cincom Smalltalk - ESUG17: Cincom Smalltalk™ Roadmap 2017

Title: Cincom Smalltalk™ Roadmap 2017 When: Tuesday, September 5, 9:00 a.m. – 9:45 a.m. Name: Arden Thomas Type: Talk Abstract: In this presentation, Arden Thomas, the Product Manager for Cincom […]

The post ESUG17: Cincom Smalltalk™ Roadmap 2017 appeared first on Cincom Smalltalk.

Tom Koschate - Ubuntu 64-bit and Cincom Smalltalk 32-bit

You’ll need to issue these commands:

sudo apt-get install libc6:i386
sudo apt-get install libx11-6:i386
sudo apt-get install zlib1g:i386


Pharo Weekly - Free Ephemeric Cloud for Members

Pharo cloud… is now available for free for Pharo association members.

https://association.pharo.org/news/4897928


Pharo Weekly - Sista: the Optimizing JIT for Pharo getting open-alpha

Another great blog post from Clément Béra, one of the main architects of the forthcoming optimising JIT for Pharo:

https://clementbera.wordpress.com/2017/07/19/sista-open-alpha-release/

Stef


Clément Béra - Sista: open alpha release

Hi everyone,

It is now time to make an open alpha release of the Sista VM. As with all alpha releases, it is reserved for VM developers (the release is not relevant for non-VM developers, and clearly no one should deploy any application on it yet). Last year we had a closed alpha release with a couple of people involved, such as Tim Felgentreff, who added support for the Sista VM builds in the Squeak speed center after tuning the optimisation settings.

The main goal of the Sista VM is to add adaptive optimisations such as speculative inlining to Cog’s JIT compiler, using the type information present in the inline caches. Such optimisations both improve Cog’s performance and allow developers to favour easy-to-read code over fast-to-execute code without a performance penalty (typically, #do: performs the same as #to:do:).
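To picture that claim, here is a small hypothetical snippet (not from the post): both loops compute the same sum, and on the Sista VM the closure-based #do: version should approach the speed of the manually indexed #to:do: version.

```smalltalk
| data sumDo sumToDo |
data := #(1 2 3 4 5).

"Readable, closure-based iteration; historically slower because
each iteration evaluates a block."
sumDo := 0.
data do: [ :each | sumDo := sumDo + each ].

"Manually indexed iteration; traditionally faster on Cog alone."
sumToDo := 0.
1 to: data size do: [ :i | sumToDo := sumToDo + (data at: i) ].

sumDo = sumToDo "true in both cases; Sista aims to make the first form just as fast"
```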

Benchmarks

As shown in the following figure, generated from the Squeak speed center data, benchmarks typically run between 1.5x and 5x faster on the Sista VM than on the current production VM. The figure shows the time to run each bench (hence, smaller columns imply less time spent in the bench and a faster VM). Four columns are shown for each benchmark:

  • Cog: the current production VM.
  • Cog+Counters: the current production VM with the overhead of the profiling counters used to detect hot spots and provide basic-block usage profiling.
  • Sista (Cold): the Sista runtime from an image with no optimised code, approximate start-up performance.
  • Sista (Warm): the Sista runtime started on an image with optimised code present, approximate peak performance.

[Figure: BenchSista2017 — benchmark results for the four configurations]

The figure is extracted from my Ph.D. thesis, where one can find all the relevant data to reproduce the benchmarks.

In practice, on real-application benchmarks (such as the TCAP benchmark, not shown in the figure), the Sista runtime is around 1.5x faster. Specific smaller benchmarks sometimes show more significant speed-ups (JSON parsing, bench (c) in the figure, shows 5x) or no speed-up at all (Mandelbrot, bench (i) in the figure: the time is spent in double-precision floating-point arithmetic, and I did not implement double optimisations in Sista).

Optimisations

For this first release, the main focus has been on closure inlining and on getting decent benchmark results to interest people looking for an efficient Smalltalk.

Several optimisations (String comparisons, inlined object allocations, unchecked array accesses) show a 1.5x speed-up on benchmarks where those operations are intensive, but based on profiling larger applications (for example the Pharo IDE), the speed-up comes mainly from closure inlining.

Some benchmarks in the benchmark suite focus on other things, such as 32-bit large integer or double-precision floating-point arithmetic. These benchmarks typically use inlined loops (#to:do:, etc.) and hence don’t really benefit much from the runtime compiler.

Naming convention

Just a quick note on the names, to avoid confusing everyone…

Sista is the name of the overall infrastructure/runtime.

Scorch is the bytecode to bytecode optimising JIT, written in Smalltalk. It relies on Cogit as a back-end to generate machine code. It can ask Cogit for specific things such as the inline cache data of a specific method.

Cogit is the bytecode to machine code JIT compiler. It is used alone as a baseline JIT and can be combined with Scorch to act as an optimising JIT.

The following figure summarises the Sista architecture and the interactions between the frameworks:
[Figure: Sista architecture overview — interactions between Scorch and Cogit]

Overview of the runtime compiler Scorch

Scorch is called by Cogit on a context with a tripping counter (i.e., a portion of code executed many times). The optimiser then:

  1. Selects a context to optimise (always a method context).
  2. Decompiles the context method to an SSA IR.
  3. Performs a set of optimisations.
  4. Generates back an optimised compiled method.
  5. Installs the optimised method and registers its dependencies.

In step 1, Scorch looks for the context defining closures on stack. The typical case is that the Array>>#do: method has a tripping counter. Optimising Array>>#do: won’t make any sense if the optimiser cannot optimise the closure evaluated at each iteration of the loop. The optimiser typically selects the sender context of Array>>#do: for optimisation, so later in the optimisation process the closure creation and evaluation will be removed.
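The typical case described above can be pictured with a hypothetical sender method (names invented for illustration):

```smalltalk
"Suppose the counter trips inside Array>>#do:. Optimising #do: alone
would not help, because the block argument comes from its sender.
Scorch therefore selects the sender context, e.g.:"
sumOf: anArray
    | sum |
    sum := 0.
    anArray do: [ :each | sum := sum + each ]. "closure created here"
    ^ sum
"Inlining #do: into #sumOf: lets the optimiser also inline the block,
removing both the closure creation and its evaluation."
```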

In step 2, Scorch generates a control flow graph of basic blocks, each basic block having a linear sequence of instructions. This step includes heavy canonicalisation and annotation of the representation (basicBlocks are sorted in reverse postOrder, annotated with the dominator tree, loops are canonicalised, sends are annotated with runtime information from Cogit, the minimum set of phis is computed, etc.).

In step 3, Scorch performs a set of optimisations on the control flow graph. Multiple inlining phases happen, whose goal is to inline code in nested loops, to inline short methods, and to inline code that would lead to constant folding or closure inlining. Part of the inlining phase consists of removing temp vectors (once the closures are inlined). Aside from inlining, one optimisation phase focuses on loops, hoisting code out and in rare cases unrolling them. The other phases consist of dead-branch removal, a better SmallInteger comparison/branch pipeline, redundant type-check removal, common subexpression elimination, elimination of unused side-effect-free instructions, heap read/write redundancy elimination and other minor things like that.

In step 4, Scorch makes small changes to get the representation into a proper state for code generation (some instructions are expanded, the single return point is split into multiple ones, etc.). It then analyses the representation to figure out which values will become temporary variables and which will be spilled on the stack. Future temporaries are then assigned a temp index. The temp index is assigned first by coalescing phis (to decrease temp writes) and second through graph coloring (to use the smallest number of temps). Once done, the representation is traversed, generating bytecodes for each basic block. The size of each jump is then computed, and the final optimised method is generated.

In step 5, Scorch installs the optimised method, potentially in a subclass of the original method’s class (customisation). The optimised method has a special literal which includes all the deoptimisation metadata needed to reconstruct the runtime stack with non-optimised code at each interrupt point. In addition, Scorch adds to the dependency manager a list of selectors which require the optimised method to be discarded if a new method with one of these selectors is installed (look-up results could change, confusing speculative inlining, etc.).

Next optimisations to implement

Apart from rethinking the optimisation planning and improving all the existing optimisations, new optimisations may be added. Off the top of my head, the major next things are probably:

  • Full object sinking: Right now reads/writes to objects are analysed and simplified, but nothing entirely removes object allocations when they don’t escape the method. That was not done because almost every object escapes, but if one directed the inliner to remove escapes or performed partial escape analysis, I guess we could see significant speed-ups.
  • Something for large integer arithmetic. The minimum would be the ability to permute the arithmetic operations using associative/commutative properties, fold the ones between constants and hoist others out of loops.
  • Float boxing/unboxing management. That would remove a lot of overhead in float-intensive benchmarks.

There are also multiple minor things to do here and there. Improving loop optimisations would likely yield significant speed-up too.

How to get/build a Sista image and VM

1) Get the Pharo 6 release image and VM, for example by doing on the command line:
wget --quiet -O - get.pharo.org/60+vm | bash

2) Execute the following code (DoIt) to prepare the image:

"Add special selector for trap instruction"
Smalltalk specialObjectsArray at: 60 put: #trapTripped.
"Disable hot spot detection (to load the Scorch code)"
Smalltalk specialObjectsArray at: 59 put: nil.
"Recompile the fetch mourner primitive which has strange side-effect with alternate bytecode set and closures"
WeakArray class compile: 'primitiveFetchMourner ^ nil' classified: #patch.
"Enable FullBlockClosure and alternate bytecode set"
CompilationContext bytecodeBackend: OpalEncoderForSistaV1.
CompilationContext usesFullBlockClosure: true.
OpalCompiler recompileAll.

3) Add in Monticello the repo http://smalltalkhub.com/mc/ClementBera/Scorch/main,
load ConfigurationOfScorch, and execute the DoIt:
ConfigurationOfScorch load

4) Go to https://github.com/OpenSmalltalk/opensmalltalk-vm
and compile a squeak.sista.spur VM.

5) Restart your image with the Sista VM. You can now execute:

"Opening Transcript"
Transcript open.
"Reference value"
25 tinyBenchmarks logCr.
"Enable Scorch optimizations"
Smalltalk specialObjectsArray at: 59 put: #conditionalBranchCounterTrippedOn:.
"Optimised value"
25 tinyBenchmarks logCr.
"Disable Scorch optimizations"
Smalltalk specialObjectsArray at: 59 put: nil.

It should show on the Transcript something like this (copied from my machine):

'2486945962 bytecodes/sec; 150270417 sends/sec'
Counter tripped in Integer>>#benchmark
Installed SmallInteger>>#tinyBenchmarks in SmallInteger
Counter tripped in Integer>>#benchmark
Installed SmallInteger>>#benchmark in SmallInteger
Counter tripped in Integer>>#benchFib
Installed SmallInteger>>#benchFib in SmallInteger
'3849624060 bytecodes/sec; 271220541 sends/sec'

That code was run on the Sista runtime.

6) Optionally, add in Monticello the repo http://www.hpi.uni-potsdam.de/hirschfeld/squeaksource/BenchmarkRunner and load the 2 packages to have a set of benchmarks to toy with.

Note when toying around

If you want to experiment with the Sista runtime, you need to note:

  • A certain number of iterations is needed to reach peak performance.
  • DoIts are not optimised (hence if you don’t put your code in a method, it won’t get optimised).
  • It’s still not very mature, hence it is possible to build benchmarks where the performance is not that good.
  • Expect crashes.
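In particular, since DoIts are not optimised, benchmark code should be wrapped in a method first. A hedged sketch, following the same compile: pattern as the setup DoIt earlier (the selector name is invented for illustration):

```smalltalk
"Install a method so the loop can trip counters and get optimised."
Integer compile: 'myLoopBench
    | sum |
    sum := 0.
    1 to: self do: [ :i | sum := sum + i ].
    ^ sum' classified: #benchmarks.

"Run it repeatedly from a DoIt; the method body (not the DoIt itself)
is what Scorch will optimise once its counters trip."
1 to: 10 do: [ :i | 1000000 myLoopBench ].
```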

Another interesting thing to do is:
optimisedMethod metadata printDebugInfo
which shows [most of] the inlined code in the given optimised method and lets one try to understand the optimiser’s inlining decisions. In the case of tinyBenchmarks, the method benchmark would show something like this (based on my machine):

benchmark
   52) atAllPut: Inlined (SequenceableCollection>>#atAllPut:) [0]
     41) from:to:put: Inlined (SequenceableCollection>>#from:to:put:) [1]
       56) min: Inlined (Magnitude>>#min:) [2]

The number at the beginning (52) is the bytecode offset of the inlined send, followed by the selector of the send (atAllPut:), followed by the method inlined (SequenceableCollection>>#atAllPut:). In some cases several methods may be inlined. The last number ([0]) is the order in which the methods are inlined.

The indentation shows the inlining depth (#from:to:put: is inlined into #atAllPut:, itself inlined into #benchmark, for example).

In the case of benchmark, other methods were inlined, but they were proven to be non-failing primitives, so they are not shown here.

In the case of non-local return inlining, more complex logic is involved and the debug info may be incomplete.

System integration: Some TODOs

Many things are only partially done in the IDE. Customised methods are currently shown in the class browser. It is possible at each interrupt point to deoptimise an optimised context into multiple deoptimised contexts, but the debugger code needs to be updated to do so. Hooks for method installation need to be added to correctly ask Scorch to discard optimised methods that depend on the installed selector.

Another thing is the thisContext keyword, which now sometimes shows an optimised context. Again, at an interrupt point it is possible to request deoptimisation, but no IDE tool does so right now.

Lastly, the deoptimiser is written in Pharo. It is meant to be completely independent from the rest of the code and needs love: some parts still have dependencies, leading to crashes.

I hope you enjoyed the post. Please report on the vm-dev mailing list any experiment with the Sista VM.


Hernán Morales Durand - Iliad version 0.9.6 released

Lately I have been playing with the Iliad Web Framework and decided to publish some updates which I want to share with you:

  • A new web site based on GitHub pages, with install instructions, screenshots and links to pre-loaded images and documentation.
  • Updated Iliad to load in Pharo 6.0.
  • Added an Iliad Control Panel, based on the Seaside one, which allows one to create/inspect/remove web server adapters.

July 18, 2017

Pharo Weekly - Keccak-256 hashing algorithm

Hi there!

I am just releasing the first version of the Keccak-256 hashing algorithm: https://en.wikipedia.org/wiki/SHA-3
This version is based on a JavaScript implementation: https://github.com/emn178/js-sha3
This implementation supports byte arrays and ASCII and UTF-8 strings as messages.
Soon I will be adding support for the rest of the Keccak family of hashing functions; since the implementation is quite configurable, it just needs some constructors with specific configurations, plus tests for these other use cases.
Here is a one-liner for building an image with version v0.1:

Torsten Bergmann - RedditSt20

Pierce extended Sven's excellent "Reddit.st in 10 elegant classes" with even more classes. Read more.

July 17, 2017

Cincom Smalltalk - Make Plans Now for #Smalltalks2017

Continuing with a great tradition, FAST is organizing Smalltalks—the free conference on Smalltalk-based technologies, research and industry applications. Due to some restrictions of different concerts in La Plata, we have moved […]

The post Make Plans Now for #Smalltalks2017 appeared first on Cincom Smalltalk.

July 16, 2017

Pierce Ng - RedditSt20

I have started a booklet on Pharo, hopefully the first of, um, more than one. It is entitled RedditSt20, on my fork and extension of Sven Van Caekenberghe's excellent "Reddit.st in 10 elegant classes", to cover the following in another 10 or so classes:

  • GlorpSQLite
  • Seaside-MDL
  • username/password authentication
  • logging
  • 2-factor authentication

The book is hosted on Github. Source code is on Smalltalkhub.

The book is being written using Pillar, of course. Note that the Pharo 5 version of Pillar that I downloaded from InriaCI doesn't work - the supporting makefiles aren't able to obtain the output of "./pillar introspect <something>". Use the Pharo 6 version.

July 15, 2017

Torsten Bergmann - PharoLambda

PharoLambda is a simple example and GitLab build script for deploying a minimal Pharo Smalltalk image/vm to AWS Lambda.

Smalltalk Jobs - Smalltalk Jobs – 7/14/17

  • Miami, FL: Gemstone Lead through E-solutions Inc
    • Required Skills:
      • 7 years hands-On experience in Gemstone Database Administration
      • Good in application support, technical solution, implementation of business requirements and enhancements
      • Good development experience with VisualWorks 7.9 Smalltalk development
      • Should have excellent knowledge on the OOPS Concepts.
      • Should work independently in SmallTalk technology
      • Good working experience in Multi-vendor environment and client facing role
  • Wilmington, MA: Software Engineer II at Rudolph Technologies, Inc.
    • Required Skills:
      • Bachelors or Master’s Degree in Software Engineering, Electrical Engineering, or comparable field
      • 2 – 5 years of experience
      • Object Oriented Programming skills
      • An interest in electronics, servo systems, optics and/or image processing
      • Knowledge of embedded system development environments such as RTEMS, VxWorks or similar environment
      • Outstanding problem solving skills
      • Serviceable written and verbal communication skills
      • A strong desire for technical challenge
    • Wanted Skills:
      • Smalltalk
      • C++
      • Python
      • Control Systems (servo, stepper, robotics) course work and preferably lab experience. Academic setting is acceptable.
      • A knowledge of optics, sensor technologies or physics.
      • Image processing using a popular image processing toolkit such as Halcon, MIL, Cognex, IPP, or OpenCV
      • A working knowledge of modern SW engineering process methodologies such as SDLC, Agile, etc.
      • Knowledge of SQL for a popular DB like PostgreSQL, Oracle, or SQL Server
Good luck with your job hunting,
James T. Savidge

View James T. Savidge's profile on LinkedIn

This blog’s RSS Feed


Filed under: Employment Tagged: jobs, Smalltalk, Smalltalk jobs

July 14, 2017

Torsten Bergmann - Teapot: Web Programming Made Easy

Nice article on how to write a web application with Pharo's Teapot framework.

Torsten Bergmann - Iceberg 0.5

A new release of Iceberg for Pharo is available to work with Git.

Torsten Bergmann - CORA

An add-in for Pharo's Quality Assistant. Read more

Torsten Bergmann - Quuve

Debris Publishing has a new version of Quuve, an investment management platform written in Pharo and Seaside. It is another success story and another example of "things people built with Smalltalk". They use my Twitter Bootstrap for Seaside project. Reminds me that I want to update the project if my spare time permits. The full video demo is here.

July 11, 2017

Tom Koschate - The Magic is Back

In the interest of getting on with life, I succumbed and created a magic build image (actually there are four, but they’re the same).  So the build is now happening with Jenkins and I’ve moved to other matters for now.


July 08, 2017

Pharo Weekly - [ANN] Iceberg 0.5 released

Hi all,
I’m releasing version 0.5 of Iceberg.
This is the changelog:
Major changes:
----
– works on 64 bits
– adds cherry-pick
Other:
----
This version also includes a list of fixes, the most important one being this:
– branches are kept in line with the local working copy (so if you change a branch on the command line or in another image, it will be indicated correctly)
But there are many others; the next version will have a full list, I promise 🙂
Now, to actually use it, you will need to accomplish several steps (until I update the image):
1) You need to download the new stable VM for P7 (it does not matter if you are on P6).
Zeroconf:
64bits:
wget -O- get.pharo.org/64/vm70 | bash
wget -O- get.pharo.org/64/vmT70 | bash #If you are on linux
32bits:
wget -O- get.pharo.org/vm70 | bash
wget -O- get.pharo.org/vmT70 | bash #If you are on linux
Then, to update, execute this (sorry, it is like that because we still have an older Metacello version):
#('Iceberg-UI' 'Iceberg-Plugin' 'Iceberg-Metacello-Integration' 'Iceberg-Libgit' 'Iceberg' 'BaselineOfIceberg' 'LibGit-Core' 'BaselineOfLibGit')
  do: [ :each | each asPackage removeFromSystem ].
Metacello new
  baseline: 'Iceberg';
  repository: 'github://pharo-vcs/iceberg';
  load.
There will be a 6.1 version that provides Iceberg 0.5, but it requires a different version of the C plugins and hence a different VM.

July 07, 2017

Pharo Weekly - News from PR battle front

I prepared a script that should help you with the reviews of the pull requests on Pharo 7. We will later convert it into a more fancy tool. It performs these steps:
– sets the basic information: pull request number, path to your Pharo repository clone, name of your fork
– registers the repository into Iceberg and sets the pull and push target remotes
– switches the branch to the particular commit from which the Pharo image was bootstrapped
– registers the repository into Monticello packages to be able to do correct diffs
– gets basic information about the pull request from GitHub (original repository, branch name)
– registers the PR's original repository into the remotes if needed and fetches information from it
– creates a new local branch to merge the PR
– merges the PR branch
– displays a simple tool that shows the differences made in this merged branch
——–
pullRequest := 73.
target := '/path/pharo' asFileReference.
myForkName := 'myFork'.
repository := IceRepositoryCreator new location: target; subdirectory: 'src'; createRepository.
repository register.
fork := repository remotes detect: [ :remote | remote remoteName = myForkName ].
repository pushRemote: fork.
repository pullRemote: repository origin.
repository checkoutBranch: (SystemVersion current commitHash).
fileTreeRepository := (MCFileTreeRepository new directory: target / #src; yourself).
repositoryGroup := MCRepositoryGroup withRepositories: { fileTreeRepository. MCCacheRepository uniqueInstance. }.
MCWorkingCopy allManagers
  select: [ :wc | (wc repositoryGroup repositories reject: [ :repo | repo isCache ]) isEmpty ]
  thenDo: [ :wc | wc repositoryGroup: repositoryGroup ].
stonString := (ZnEasy get: 'https://api.github.com/repos/pharo-project/pharo/pulls/', pullRequest asString) contents.
head := (STONJSON fromString: stonString) at: 'head'.
sshUrl := (head at: #repo) at: 'ssh_url'.
branchName := head at: #ref.
user := (sshUrl withoutPrefix: 'git@github.com:') withoutSuffix: '/pharo.git'.
fork := repository remotes
  detect: [ :remote | remote remoteName = user ]
  ifNone: [
    | newFork |
    newFork := IceRemote name: user url: ('git@github.com:{1}/pharo.git' format: { user }).
    repository addRemote: newFork.
    newFork ].
repository fetchFrom: fork.
prMergedBranchName := 'pr', pullRequest asString.
repository createBranch: prMergedBranchName.
repository checkoutBranch: prMergedBranchName.
commit := repository revparse: user, '/', branchName.
bootstrapCommit := repository revparse: (SystemVersion current commitHash).
[ repository backend merge: commit id ]
  on: IceMergeAborted
  do: [ :error | repository mergeConflictsWith: commit ].
headCommit := repository revparse: 'HEAD'.
browser := GLMTabulator new.
browser row: [ :row | row column: #commits span: 2; column: #changes span: 3 ]; row: #diff.
browser transmit to: #commits.
browser transmit to: #changes; from: #commits; andShow: [ :a :commitInfo |
  (IceDiffChangeTreeBuilder new entity: commitInfo; diff: (IceDiff from: commitInfo to: bootstrapCommit); buildOn: a) title: 'Changes' ].
browser transmit from: #commits; from: #changes; to: #diff; andShow: [ :a |
  a diff title: 'Left: working copy / Right: incoming updates'; display: [ :commitInfo :change |
    { change theirVersion ifNil: ''. change myVersion ifNil: ''. } ] ].
browser openOn: { headCommit }.
——–
The merge operation only changes the Git working copy; no code is loaded into the image. If you want to test the PR code, you currently need to open Iceberg and reload all packages of the Pharo repository (Packages tab, Reload all).
Expect troubles 🙂
Cheers,
— Pavel

July 06, 2017

Pharo Weekly - Iceberg 0.5.1 with Pull Request review tool

I just released Iceberg version 0.5.1 with a Pull Request tool that Guille and I have worked on since yesterday.
It allows you to list open pull requests (by right-clicking on a repo: GitHub / Review pull requests… option):
[Screenshot: list of open pull requests]
And then if you double-click on one (or select it with the right button), you will see this:
[Screenshot: pull request review tool]
It allows you to see the changes and to:
– merge changes into your image (in case you want to look at the code in more detail, run tests, etc.)
– accept a pull request
– reject a pull request
No, it does not show comments (at least not *yet*), and it does not allow you to add comments, reviews, etc.
This could be done, but there is no time to implement it now, so for now this has to be enough.
Again, this can be loaded in a 6.0 image by executing this script:
#('Iceberg-UI' 'Iceberg-Plugin' 'Iceberg-Metacello-Integration' 'Iceberg-Libgit' 'Iceberg' 'BaselineOfIceberg' 'LibGit-Core' 'BaselineOfLibGit') do: [ :each | each asPackage removeFromSystem ].
Metacello new
  baseline: 'Iceberg';
  repository: 'github://pharo-vcs/iceberg:v0.5.1';
  load.
(and you still need the VM that is meant for Pharo 7)
These tools are open for you to use on your projects… and to improve them: I accept pull requests on pharo-vcs/iceberg.
cheers!