I was born in 1966 in Vigevano and I have always lived here,
except for a few nice years during high school,
when I was in Intra (on Lago Maggiore), and for about one year
when I lived and worked in California.
I also have a few social accounts/profiles, but I'm not very active
on any of them.
If you want to contact me for any reason, feel free to send an
email to x@y.z.
If you are a recruiter, I'm not offended at all (why
should anyone be?), but please consider that I am not
currently looking for a new job.
Except for spam or spam-looking messages, I normally reply to
personal emails, so if I don't, please consider that your
email may have been lost somewhere. It may also take me some
time to reply (even days).
I am a computer programmer.
I have been programming since I was 12, starting with programmable
calculators (a TI-57 and an HP that my brother got as a
bargain because the seller thought it was broken, when it was
actually just RPN based).
The first "real computer" I worked on was an Apple ][, 8
bit, 16Kb RAM and 1MHz clock (the computer I'm typing this
text now is 64 bit, 16Gb RAM and 3GHz clock x 4 CPUs, in
other words is roughly
8×1,000,000×3,000×4=96,000,000,000
i.e. ninety-six billions times better, not counting
the storage - magnetic tape cassette instead of a SSD).
I program in many computer languages.
Programming languages are actually one of my interests:
I have experimented quite a bit with many of them,
including exotic ones, and I have also created my own.
The languages that I currently use most often are:
## Python
A very nice language. The first Python code I saw
was in a Usenet post on it.comp.lang.c++ and I
thought it was pseudo-code.
It is so nice that this can even be a problem: I
got the impression that the temptation is
sometimes to write overly complex solutions,
because the basic solution one would implement in
C++ is too easy and a programmer's brain is always
looking for a challenge.
The other problem is speed compared to C++, but
this can easily be mitigated by writing C++
modules interfaced using sip, writing Cython
modules, or even more simply by using PyPy if the
dependencies allow it. A minimal example of the
general approach is sketched below.
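Just to give the flavor of it, here is a tiny extension module written directly against the raw CPython C API (a hypothetical `speedup` module invented for this page; tools like sip or Cython generate similar glue for you):

```c
#include <Python.h>

/* A tiny C function exposed to Python: sums a sequence of numbers.
   The point is only to show the shape of the glue code. */
static PyObject *fast_sum(PyObject *self, PyObject *args) {
    PyObject *seq;
    if (!PyArg_ParseTuple(args, "O", &seq))
        return NULL;
    double total = 0.0;
    Py_ssize_t n = PySequence_Length(seq);
    if (n < 0)
        return NULL;
    for (Py_ssize_t i = 0; i < n; i++) {
        PyObject *item = PySequence_GetItem(seq, i);
        if (!item)
            return NULL;
        total += PyFloat_AsDouble(item);
        Py_DECREF(item);
        if (PyErr_Occurred())
            return NULL;
    }
    return PyFloat_FromDouble(total);
}

static PyMethodDef methods[] = {
    {"fast_sum", fast_sum, METH_VARARGS, "Sum a sequence of numbers."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef module = {
    PyModuleDef_HEAD_INIT, "speedup", NULL, -1, methods
};

PyMODINIT_FUNC PyInit_speedup(void) {
    return PyModule_Create(&module);
}
```

Once compiled, `import speedup; speedup.fast_sum(range(1000))` works like any other Python function, just faster.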
Python is my first choice in many cases and my rule
is roughly "Python if you can, C++ if you need".
## C++
A powerful but complex language. It has many
defects and problems, but still it can get the
job done without compromising too much (for the
end user).
The biggest problem in my opinion is the
"undefined behavior" concept, which requires
programmers to never make mistakes. Paired with
the absurd complexity of the language, this makes
most complex C++ software a ticking bomb just
waiting to explode.
C++ is not and will never be a high-level
language that can let you forget about its
problems and concentrate on the problem you are
working on. Sometimes it gives you that
illusion, but at the very first segfault you are
reminded that you're just a few millimeters
above the metal.
One quite annoying issue with C++ is that it's hard to experiment
with.
I think that the best way to really understand something in
programming is by writing, not by reading. Reading a bit is very
important and all good, but reading too much is not, because
reading alone only gives you the illusion of having understood
something.
Unfortunately, the C++ language is in my opinion quite hostile to
experimentation, for three main reasons:
It's complex
C++ is actually terribly complex. Some of the complexity
is essential (the problem being addressed is hard) but some of it
is artificial (it's just more complex than it could have been).
It's often illogical
C++ semantics (and even its syntax) can be really surprising and
in a few places appear completely illogical.
There are in my opinion two main reasons for this: 1) it's the
output of a committee, and 2) it's an old language that changed over time.
The first reason can generate illogical choices because a committee
is itself a complex entity with complex dynamics. Sometimes decisions are not
the most logical ones for the problem at hand but are explainable by
other aspects, like the agendas and views of the different members, or even
the psychological dynamics of a meeting (e.g. approving a little nonsense from someone
who didn't say anything else in the meeting because it would look bad to
be too hard on them).
The second reason (history and evolution) can generate problems because C++
has been, at least in the past, very, very strict about keeping backward compatibility
with both C and previous versions of itself.
This means that when a mistake slips in, and it's not just an obvious "typo" but
a semantic decision that only later turned out to be a bad idea, then it's impossible
to fix it, because that would be a change, breaking all existing code that
depends on that questionable idea. Also, over the years the focus shifted a little,
and thus different parts of the language don't seem to be aligned on the same "view"
(but the earlier parts cannot be changed because of backward compatibility).
This strong attachment to backward compatibility started to change some years ago; for
example, there is now even a "deprecation" model that makes it conceivable to
stop compiling C++ code that compiled before. Even changes in the runtime behavior of
the same code are now acceptable, provided the cases in which the difference is visible
are considered absurd and uncommon.
There are no runtime errors, only undefined behavior
C++ has always been focused on speed (the idea is to leave no room for a lower-level
language between C++ and the metal... with possibly the exception of a little bit of
assembly). The focus on speed is very important and thus for example doing checks at
runtime that would trigger only for buggy code is considered a bad idea (why slow
down correct code to help bad code? fix the bad code instead!).
For example, accessing the element with index 10 of a ten-element array doesn't trigger
an out-of-bounds error (like it does in Python, Java and most other languages) but
is simply declared Undefined Behavior (UB), which means that anything may happen.
This UB concept is problematic and subtle because often "anything" translates into "nothing" and
buggy code simply seems to work (only to bite back when the damage will be maximal).
Undefined behavior can be extremely surprising; consider that it can
even travel back in time.
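As a small illustration (a toy example, not from any real codebase), consider this C fragment; the same rules apply in C++:

```c
#include <stdbool.h>

int table[4];

bool exists_in_table(int v) {
    /* Bug: valid indices are 0..3, but the loop also reads table[4].
       That read is undefined behavior, so an optimizing compiler is
       allowed to assume it never happens, i.e. that one of the first
       four iterations always returns true; some compilers therefore
       compile this whole function down to "return true". */
    for (int i = 0; i <= 4; i++) {
        if (table[i] == v)
            return true;
    }
    return false;
}
```

The buggy read at the end of the loop "travels back in time" and changes the meaning of the iterations that come before it.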
To sum it up, C++ is hard to experiment with because it's complex, logic doesn't always help,
and when you make a mistake there is no guaranteed negative feedback.
In such an environment it's not trivial to build a mental model of how things work
and thus, in my opinion, a much saner path starts with a bit of reading.
I'd suggest:
The C++ Programming Language
By Bjarne Stroustrup, the original designer of C++.
Effective *
A series of books by Scott Meyers. Those are very nice also because of their format:
a collection of small, mostly self-contained tips and specific cases discussed in detail,
which are much easier to "swallow" than a huge discussion of complex topics.
## Javascript
A terrible language but with a fantastic runtime
environment.
Actually the main architecture is OK, but the
details of the syntax and semantics are
terrible and make, in my opinion, the language
unsuitable for complex software.
What is really great is the runtime environment
in an HTML5 browser. Amazingly, three badly
designed and quirk-riddled languages like HTML,
CSS and Javascript still make a powerful
combination (for the end user).
## Lisp (my own dialect)
Lisp is a real eye-opener. More than a language,
it is a distilled essence of programming on which
you can build any language you like.
By "Lisp" I mean here the generic Lisp idea with
full macros, without self-inflicted amputations
like those of Scheme (template-based macros
only, recursion as the only form of iteration) or of Clojure
(functional approach only).
For reasons that are not completely clear to me,
basically no one (numerically speaking) uses Lisp
today. While ignorance (not knowing about it)
may be the reason for the majority, even those who know
it very well (Peter Norvig, director of research at
Google, for example, is an expert Lisper) still
don't use it, and this puzzles me.
I write most Lisp code in my own implementation of
Lisp named JsLisp (a Lisp compiler targeting
Javascript).
JsLisp is a compile-only implementation, and
this is possible because Javascript provides
eval, which allows the creation of new
Javascript code at runtime.
When such a feature is not present, a Lisp
compiler also needs to embed a Lisp interpreter,
because Lisp macros require the ability to
execute arbitrary Lisp code at compile/read time.
The project is hosted
on github
and some documentation (albeit a bit
outdated) is available
on the JsLisp website.
Here you can see a small video showing the JsLisp
IDE (running inside a browser) and some demo
programs written with JsLisp.
## C
I don't use C very often now because I am
currently working mostly on PCs, where resources
are abundant. Still, the language is a nice
"portable assembler" with which you can implement
efficient algorithms without being annoyed by
registers and other CPU limitations.
For quite a long time I wrote object-oriented
software in C, using a base object as the first
member of a derived object for inheritance and
function pointers to emulate dynamic dispatch; a
minimal sketch of the pattern is shown below.
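This is roughly what that style looks like (hypothetical names, just for illustration):

```c
#include <stdio.h>

/* "Base class": shared data plus a function pointer used to
   emulate dynamic dispatch. */
typedef struct Shape {
    const char *name;
    double (*area)(const struct Shape *self);
} Shape;

/* "Derived class": the base object is the FIRST member, so a
   Circle pointer can safely be used where a Shape pointer is
   expected (inheritance by layout). */
typedef struct Circle {
    Shape base;
    double radius;
} Circle;

static double circle_area(const Shape *self) {
    const Circle *c = (const Circle *)self;   /* "downcast" */
    return 3.14159265358979 * c->radius * c->radius;
}

int main(void) {
    Circle c = {{"circle", circle_area}, 2.0};
    Shape *s = (Shape *)&c;                   /* "upcast" */
    printf("%s area = %f\n", s->name, s->area(s));  /* dispatch */
    return 0;
}
```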
Now I don't do this anymore and I simply work
with C++ when it makes sense and there are no
resource or interoperability problems.
## Assembler
I started writing code in assembler and, in
retrospect, I think this is a great path to
programming.
Assembler is conceptually simple (or at least it
was simple on the Apple ][, with the fantastic 6502
processor from which I took my nickname).
I strongly believe that the human brain is wired
for a concrete-to-abstract path when understanding
things, and thus it makes sense to start with something
so simple and concrete that even the concepts of
"function" or "variable" are not natively present.
Once you have a mental model of how things work,
it is easy to build abstractions like
higher-level languages such as BASIC. Even things
like a BASIC string are not "magic", but
something whose implementation you have a rough
idea of.
For quite a long time I coded almost exclusively in
assembler, and even the first commercial
program I completed (PaintStar, a pixel-oriented
paint program for the Apple ][ family) was
written entirely in assembler.
I have devoted a few emulators to my memories of
the 6502. One is a simple text-mode-only
Apple ][ emulator
written long ago in C and assembler, and another
is an emulator of the processor only,
written
in Javascript (my first and so far only
attempt at writing a JIT compiler, something
slightly more complex for a processor where
self-modifying code is permitted and commonly
used).
After the 6502 I worked quite a bit with 68k
processors and with x86 in 16/32-bit mode (including
manually scheduling instructions for parallel
execution in the U/V pipelines of early Pentiums
:-D).
I have much less experience with assembler-level
64-bit coding, but I'm currently filling this gap
(I'm working on my first 64-bit native-code
compiler for a Lisp dialect; it generates
x86-64 machine code directly, without depending
on a C compiler or an assembler).
The calling convention uses rdi
as a pointer to the argument list
and rsi as the argument
count. Values are all stored in 64-bit
unions tagged using the lowest 1-3 bits (one bit
only for floating-point values, leaving 63
bits for the numeric value itself), and the
calling convention is compatible with a "C"
declaration of Value foo(Value *args,
int count).
The first argument is always the closure
object itself; variadic calls transform
the arglist parameter into a list object
in the function prologue.
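A rough sketch of what such tagging can look like (my simplified reconstruction with made-up names, not the actual compiler code):

```c
#include <stdint.h>
#include <string.h>

/* Every value fits in 64 bits; the low bit(s) encode the type.
   Simplified scheme for illustration: low bit 1 = floating point
   (the double keeps its top 63 bits), low bits 00 = other types. */
typedef uint64_t Value;

static Value box_double(double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    return bits | 1ULL;              /* sacrifice 1 mantissa bit as tag */
}

static double unbox_double(Value v) {
    uint64_t bits = v & ~1ULL;       /* clear the tag bit */
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}

/* Compiled functions are callable from C with this signature:
   on x86-64 SysV, args arrives in rdi and count in rsi. */
typedef Value (*Fn)(Value *args, int count);
```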
I'm still reworking the base object
representation, but I expect that leaving
parameters out of the call stack will allow
simpler GC logic while not paying too much
in performance.
So far I'm impressed with the speed, which
even for such a naïve implementation is not
far from that of much more sophisticated JIT
compilers (there is however no GC yet).
There are also programming languages I avoid.
I think that language can shape the thought process, and
this is true also for computer programming languages.
Some languages really allow you to think the previously
unthinkable and provide you with new weapons to fight
real-world problems.
Other languages instead cripple your brain and keep
it confined to predefined schemas. The sad part is that
in some cases this is not an accident but the very goal of
the language: the aim is not enabling programmers to do
great things, but just avoiding big damage.
While of course big damage is not a good thing, the
solution, in my opinion, lies more in education and
practice than in using blunt tools. A bad programmer is
not going to write good code just because the language
has been blunted.
Languages that I prefer to avoid are:
## Java (crippled and limited by design).
Long ago, when I was working for Enel, I was
given the opportunity to attend a Java
training course and I got a Java 1.1 certification.
Even during the course, however, I got the impression
that the language was full of bureaucracy and
wrinkles designed to constrain.
I did not follow the evolution of the language
closely, as I decided back then that I would
try to avoid it if possible. So far my
strategy has been successful.
## PHP (obscenely crippled because designed by
illiterates).
PHP is bad, so bad that it's not even funny. It's
so bad that it's hard to distinguish it from a
caricature of bad programming languages.
Amazingly enough, it's a winner on the web: the
worst possible language is probably one of the
most popular. Go figure.
In my opinion, however, the bad parts of PHP (i.e.
most of it) are due to simple ignorance and not to
deliberate design. Moreover, the limitations of the
language are there because the implementers
didn't know how to get certain features in, not
because they didn't want programmers to use
them.
## COBOL (maybe OK in the '70s as better than assembler;
inexcusable today).
## FORTRAN (ditto).
## C++ extreme metaprogramming
C++ as a programming language is full of
problems but still reasonable. The
metaprogramming part, however, is awful.
Exploiting what I think was an unwanted
side effect of the implementation of C++
templates, after some years people discovered
that the absurdly intricate overload resolution
rules and the strange SFINAE rule
([template] Substitution Failure Is Not
An Error) could be used to build IFs and
recursion-based loops executed at compile time
during template expansion.
This would have been no problem if many
programmers (not understanding that there is a
difference between what can be done and
what should be done) hadn't started
using these unwanted and unanticipated
"features" (basically bugs) to try to write real
metaprogramming code.
Amazingly enough, this ended up in huge,
undebuggable template-based libraries that take
forever to compile, exhaust all available RAM
at compile time, and give tens of screenfuls
of error messages when you make a typo.
Even more amazingly, this absurd way of
programming made it into the standard. Compilers
have been fixed over the years to handle "complex"
(i.e. not completely trivial) templates, when in
the early days, for example, recursion limits were a
problem. C++11 now incorporates some of the
monstrosities that template-metaprogramming-obsessed
guys designed.
Thanks to the absurd choice of brainf**k as the
metaprogramming language, it's still too
much to ask for a real loop, data structures or
even just enumerating the members of a class at
compile time. Non-trivial metaprogramming in C++
is still impossible (and IMO it's not something
that will be fixed in any foreseeable future).
My choice for metaprogramming in C++ is writing
external code generators in Python. Of course
this is not an ideal solution, as there is no compiler
telling you what it knows and you're forced
to work at the text level (a nightmare
given C++'s very complex grammar), but it's still
way better than writing metaprograms in such a
sad and poor environment.
I haven't been using Windows for several years now,
neither at home nor at work. At home I currently use an
Arch Linux system, and I also have an old Mac mini and an old
iPad on which I experiment a bit with OS X. I only use Windows
to check that software works reasonably on that system.
I'm considered old and I have started behaving as is customary for old people.
I am now 55 and I have noticed that the industry seems to think
that I am too old to be interesting as a resource (the number of
job offers I get has diminished in recent years).
However, I'm currently a lot less publicly involved than I was long ago, so
maybe this also plays a key role.
I have recently found in myself a sort of automatic repulsion for anything new, but
I'm consciously trying to compensate for it. Maybe it is normal for aging people
to become more resistant to change, or maybe by now a default evaluation of "rubbish" is
the statistically correct point of view.
I never started a serious blog or video channel (I only experimented with different
platforms), but nonetheless here are a few points that I've come to over the years:
Reading is good, reading too much is bad
In coding, reading can give you some direction, but the only way to learn is by writing a lot of code. If you read too much
you may get the ILLUSION that you know a topic while in reality not being able to do anything useful in that area.
Ignore non-English material, avoid producing it
There's nothing to gain by splitting knowledge by country. If you have problems with technical English, fix that first.
I was born in Italy and I currently live there but, in my opinion, bad English is better than good Italian if we're
talking about programming.
Localizing means focusing on differences and divisions instead of on unity and on what is common; this is true in
my opinion for everything, but while localization may be acceptable or even the right thing for non-technical people
(for historical reasons, not as a value per se), I think it's absolutely incomprehensible and inexcusable in STEM.
Once done, do it again
When writing code it's normal to make decisions, and later in the construction one may realize that a better approach was
possible. Changing, however, would mean restarting from scratch and rewriting a lot of existing code. My suggestion is to
still complete the project and get to "the end" (i.e. if you're writing a chess-playing program, get to a program that
is maybe not very strong but can at least play a full game).
When you finish your project, start it over from scratch. This time you have the experience to fix those early mistakes
and get a better result... but it wouldn't be surprising if during the new implementation you got another, even better idea
about the basics. My suggestion is to still finish the second version but, after that, start again.
And over. And over. You can move to something else only after a few iterations, not before (if you care about writing
decent stuff).
If it works, you didn't look close enough
In complex (i.e. not completely trivial) software there will be bugs, or at least unintended behaviors. When
everything works as expected, it simply means that you didn't look close enough.
Note that this is not an excuse for leaving bugs in, or for coding without thinking because there will be bugs
anyway... just a reminder that the code is most probably not doing what you think it's doing. When debugging, it is also
important to accept that you don't know what the program is doing (if you did, the bug wouldn't be there) and
to carefully measure and inspect everything. It can be really surprising how wildly and badly things can go internally
while still apparently (or even effectively) "working as expected" by mere coincidence.
When facing bad behavior and debugging in search of the issue, it's not uncommon to find other genuine bugs that are
nonetheless unrelated to the observed problem.
Your time is precious, choose wisely what to invest on
When I started in IT, information was scarce and expensive; I was paying for compilers (not kidding). Today we're in the
opposite situation (free compilers, documentation, even free tiers of cloud computing and storage are available)
and everyone in coding is flooded with information, tips, techniques, frameworks, libraries, languages, methodologies,
tools... all for free.
Most of that is useless nonsense (90% of everything is crap), so it's important to choose correctly.
I don't think there is an easy way to guess, but a few of the rules I gave myself are:
Never invest free time on details of a commercial product
No matter how popular it is, no matter how wonderful it is, a commercial product is in the hands of a single company that
has its own agenda, which will most probably differ from yours. What is good for them will not be what is good
for you.
A marketing department will destroy the product or change it into something else because they think it will be to their
advantage. All your knowledge about the details of that product will instantly vanish and become pointless.
It's OK to invest time in a commercial product if you're getting some other compensation (e.g. money), but you're not
"growing" by learning it, because in a few years everything you added to yourself will disappear instantly. It's not a
question of if, only of when.
You're just accumulating future rubbish.
Never invest free time on details of a fake open source product
Something may be "open source" in theory, but with a firm grasp by a single company or paying entity, and in that case the
situation is not very far from a commercial product. While IN THEORY if the "owner" steers development in a direction
you or the community doesn't like, you or the community could simply fork the project... this in practice is often
unfeasible. For example, if one great asset of the product is the documentation then a truly volunteer-driven open
source version has no hope to stay relevant in my opinion. While everyone loves to read good documentation, writing it
requires a lot of work and it's not that funny... so must be paid work.
What you can get from volunteer work is at most a crappy documentation generator; the documentation itself (where the
real value is) requires a LOT of repetitive work that never ends (you must keep updating it as something evolves).
Popularity is a very bad proxy for quality
If something is very popular, maybe it's good and worth your attention, but probably it's not. Popularity is a strange
property, and the best proxy for popularity is popularity itself (i.e. the more popular something is, the more popularity
it will gain). It's a lottery. There is good stuff that will never be popular and literal rubbish that will be number 1 on
all the charts for a long time.
Apply the 10 years rule
If you plan to invest your time in some topic, where was that topic 10 years ago? What makes you think it will still be
relevant in 10 years? IT is moving at an amazing speed and most of what is super trendy now will be completely forgotten
soon. If what you're interested in has a version number, then the risk is high that a new version will come out pretty
soon. Maybe that's not a problem, but maybe it is.
The trick is that there is no trick
This last one is in my opinion the most important. Everyone is looking for tricks, but the only good trick I've found is
realizing that there is no trick and that you just have to work your ass off. It's that simple.
I studied pure math at university (and loved it), but I have
always worked as a computer programmer, something for which I
only have a high-school diploma.
I took my degree in math while working full time because I
liked the subject (I fell in love with the book Algebra by
I. N. Herstein, one of the reasons for my switch from
engineering to math).
I never formally used my degree for work (except that it
allowed me to work in the US, because H1-B immigration
rules at the time translated "specialized" as
"holding a university degree").
I don't believe too much in the value of formal studies:
much more important to me are personal dedication
and interest in the subject. For example, I know very good
programmers who have no formal training at all (not even
at the high-school level), and conversely I know IT graduates who
literally don't understand anything they are talking about.
When I'm asked to evaluate someone, having a sci/tech
degree only means to me that the person can (or could at
the time) provide some continuity: it is not a proxy for
being smart or for being able to deliver in the specific field.
Of course, NOT having a degree doesn't mean the person
cannot provide continuity.
I am also a bad pianist, a bad chess player, a bad speedcuber and
bad at sports. If however you have never seriously attempted any of
these activities, I am probably much better at them than you (by "bad" I
mean "worse than the average of those who do it seriously").
Piano
I started studying piano as a kid because my mother wanted
us to try many different paths when young (thanks mom!).
I didn't practice much, however, and in the end I dropped out
of music school after about three years of piano.
Many years later I fell in love with Chopin and got back to
the piano, this time teaching myself. The following is a video of
one of my early takes on what is considered the dream of
hobbyist piano players: Chopin's Fantaisie-Impromptu Op. 66...
Chess
I am a FIDE-rated player with a current rating of about
1800. This means that I am weaker than the average rated
player, but also that I don't just make random moves on the board,
so even a professional player needs to pay some attention.
I am also a chess arbiter, and a few times every year I spend
time at chess tournaments.
I also take care of the online broadcasting of chess events,
using software I wrote myself after reverse-engineering
the serial protocol used by DGT chessboards (the software
provided with the boards was pricey and, more importantly,
really, really bad).
I own 5 DGT boards and about 20 wooden tournament boards and
clocks with which I'm happy to help friends when they need
to organize a chess tournament.
I also wrote several chess-playing programs, some of which
ended up stronger than me (writing a chess engine is a
very good programming exercise in my opinion, especially for
debugging).
Speedcubing
I first met the Rubik's cube thanks to my brother in 1983, but
after a short time I forgot about it. Recently I found it
again in my mother's house and decided to look at what the
scene is like now. I restarted learning it using more efficient methods
(I'm using CFOP)
and my average is now around 35 seconds... my aim is to get
an average below 30 seconds (it's still going to be
a long way :-D ).
I wonder why the Rubik's cube is not used in math classes when
teaching finite groups; it's IMO a very good example,
and many apparently strange concepts, like subgroups,
cycles and closure, can be shown quite clearly on the cube and
on other twisty-puzzle variations.
I also own and love
a Megaminx,
but I'm still horribly slow at solving it... somehow I've
always had sort of a fetish for dodecahedrons.
Sports
I regularly practiced karate, skiing and swimming for
some time; now I enjoy running medium distances (about
40 minutes). I've run a few half-marathons (I was tempted by the
full marathon, but I read that while it is of course feasible
for most people, it's not really good for your health, being a bit
past the sustainable limit).
I live in Vigevano, a small city (~60k residents) not far from
Milan, in northern Italy.
I work for COMELZ, a company
that produces machines and software for the shoe/apparel/leather
goods market.
The company recently acquired
Develer,
a company created by long-time friends from Florence.
Now the software team working on COMELZ products is much
bigger (tens of developers) and much more organized, with all
the pros and cons of that situation.
I've basically been a lone wolf for a very long time, and
there are a few aspects of working in big groups that I'm slowly
getting used to again (like meetings ;-) ).
I wrote much of the initial version of the software for our
cutting machine family (CMxx/CZxx), and this allowed me to
work in several interesting areas:
## Axis control
In our company we produce the brushless motors
used in our machines ourselves, both the mechanical
parts and the control electronics.
I wrote the software that computes the spatial
positioning online, with 1 ms accuracy, minimizing total
time while keeping the motion within dynamic limits;
a toy sketch of the general idea follows.
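Just to give a feeling of what "minimizing total time while staying within dynamic limits" means, here is a toy 1-D version of the problem (my own simplified illustration, not the actual machine code):

```c
#include <math.h>
#include <stdio.h>

/* Time-optimal 1-D move of a given length under velocity and
   acceleration limits: accelerate at a_max, cruise at v_max,
   decelerate at a_max (a trapezoidal velocity profile; when the
   distance is too short the profile degenerates into a triangle
   and v_max is never reached). */
double min_move_time(double dist, double v_max, double a_max) {
    double d_ramp = v_max * v_max / a_max;  /* accel + decel distance */
    if (dist >= d_ramp) {
        /* trapezoid: two ramps plus a constant-speed cruise */
        return 2.0 * v_max / a_max + (dist - d_ramp) / v_max;
    }
    /* triangle: peak speed is sqrt(dist * a_max) */
    return 2.0 * sqrt(dist / a_max);
}

int main(void) {
    /* e.g. a 0.5 m move with 2 m/s and 10 m/s^2 limits: 0.45 s */
    printf("%f s\n", min_move_time(0.5, 2.0, 10.0));
    return 0;
}
```

The real problem is of course multi-axis and online (the trajectory is recomputed while the machine moves), but the flavor is the same.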
## Work plan optimization
The main cutting machine family is multi-headed; in
other words, there are multiple cutting heads working
simultaneously on the same work area.
Simplifying a bit, work plan optimization is like
solving TSP instances with the time dimension added
and with multiple travelers that cannot get
closer than a certain distance to each other.
## Artificial vision
Our machines can also be equipped with cameras that
can be used for several purposes: from user
interface (pen tracking), to positioning-error
compensation, to logo detection on printed material for
cut-shape alignment, to leather boundary detection.
I also recently worked on the interesting problem of
texture classification and found a novel "smart"
filtering approach that allows our machines to work
on previously difficult-to-handle materials (for
example single-color or low-contrast digital fabrics
where the design is visible only because of texture
changes).
## Cut waste minimization
This is the very interesting problem of placing
parts on a leather hide, sheet or roll so that the
minimum amount of material is wasted. I was able to
design and implement algorithms that performed quite
well compared to the current state of the art in
the industry.
Nesting of a shoe model on leather avoiding defects.
Of course even the most basic version of the problem
is NP-complete, as it can be trivially seen that the
knapsack problem can be modeled as a 1-D subcase.
This is also my main area of interest at the moment.
to name a few.
I also designed and partly implemented early versions of our
specialized CAD/CAM software for the shoe/apparel/leather
goods industry,
Caligola4:
a full reimplementation and extension of a previous CAD
software of the company.