<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom"><title>frankie-tales</title><id>https://lovergine.com/feeds/tags/books.xml</id><subtitle>Tag: books</subtitle><updated>2026-02-25T15:33:03Z</updated><link href="https://lovergine.com/feeds/tags/books.xml" rel="self" /><link href="https://lovergine.com" /><entry><title>This was for every one: about the crisis of the web</title><id>https://lovergine.com/this-was-for-every-one-about-the-crisis-of-the-web.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-12-25T15:30:00Z</updated><link href="https://lovergine.com/this-was-for-every-one-about-the-crisis-of-the-web.html" rel="alternate" /><content type="html">&lt;p&gt;I just finished reading the delightful book by Sir Tim Berners-Lee, titled &lt;em&gt;This
is for Everyone&lt;/em&gt;, published this year. It is a long trip, almost 400 pages,
about the origin and evolution of the World Wide Web, as seen by the man who
conceived and pushed it from the start. The entire first part of the book is
dedicated to the history of the web, the W3C, and the Web Foundation's
operations as we have known them in the first 30 years of its development, from
1989 onwards.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/timbl_tife.jpg&quot; alt=&quot;This is for everyone&quot; /&gt;&lt;/p&gt;&lt;p&gt;I was there at the very beginning of the 90s: I have been connected to the Internet
since 1991, and reading such a book has largely been an emotional trip
through my memories of those events and people. He is a visionary and an idealist who
fought for an extended period to prevent his WWW creature from being intercepted
and disrupted by for-profit interests.&lt;/p&gt;&lt;p&gt;It happened almost from the start, when first NCSA, then Netscape, and Microsoft
tried one after the other to change the whole idea of openness into something
proprietary, following the same scheme of embrace, extend, and
extinguish. In practice, the complete negation of standards and openness, with
a clear goal in mind: locking users into proprietary products,
clearly for profit.&lt;/p&gt;&lt;p&gt;Tim provides evidence on multiple critical aspects of the current incarnation of
the net as we have known it over the last 20 years or more. These are both
technical and social defects, or drifts. The web is no longer what we came to
know in its first years of existence. The beginning of the end of the original
web concept was the mobile-first approach, which relegated the use of a regular
computer to a second-class experience for most users. Most digital natives
have never used a computer to access the network, and that user experience
deeply affects the current vision of the web.&lt;/p&gt;&lt;p&gt;For years now, a browser has not been the main program for accessing
content and services. Social networks are mostly not interoperable because
companies have little interest in having their users leave the walled gardens of
their apps. Using a browser and potentially exiting the company's services to
access other servers and spaces is tolerated, but is perceived as damaging
profits. That's simply because users are not users, but customers. The result is
&lt;a href=&quot;https://lovergine.com/the-shattered-internet.html&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;the shattered Internet about which I already wrote&lt;/a&gt;: the W3C standards are still
relevant, but embedded in applications and frameworks that enrich and upset the user
experience with proprietary workflows and extensions.&lt;/p&gt;&lt;p&gt;An emblematic case is Apple, which has, in practice, abandoned its WebKit engine
and Safari browser in favor of apps and proprietary services to monetize
customers and companies.&lt;/p&gt;&lt;p&gt;The concrete risk is that the whole web and its standards could become a
marginalized component of the net, with most users confined to walled-off
realms of proprietary services and social networks. The recent wave of AI innovation could
mark the final chapter of web content creation and search as we have
known them over the last 30 years. More and more users will limit themselves to
AI-provided overviews instead of collecting and consulting multiple sources of
information and independent services. That will also have a concrete impact on
revenues and interest in content creation and provision at large.&lt;/p&gt;&lt;p&gt;The second part of the book is fully dedicated to all such problems: the impact
of social networks, the last few years of generative AI, the BigCo dominance,
and includes all Tim's worries for the foreseeable future. He's an idealistic,
optimistic, and positive guy, shaped by his past experiences. However, he also has
a good dose of healthy realism. He understands that the path is nebulous and full
of dangers (specifically, the AI path is highly polarizing and can hide multiple
issues at many levels).&lt;/p&gt;&lt;p&gt;He sees in the indie web, and specifically in open and well-structured
distributed standards (such as the ActivityPub protocol), a possible way to
change the present and future by favoring interoperability and independence. A
concrete proposal is the Solid standard for personal data wallets (or pods in
Solid terminology) under complete user control, accessible by third-party
services. Such a standard is still in its infancy, but the real problem I see is
the trustworthiness of involved parties, both companies and governments.
Trust is the key, and maybe we all individually lost that superpower a long time ago.&lt;/p&gt;&lt;p&gt;Creating a corpus of rules to manage all such technologies and ensure ethical
behavior can be a desperate illusion; the only concrete alternative would be to
opt out, at the cost of exclusion from the social context (not only the digital
one). But I agree there is no other way to recover the original idea of the web.
The AI technologies are even more polarizing, splitting people between doomers and boomers, with a
bumpy road ahead. For sure, open protocols and distributed multi-peer services
are the inevitable starting point, but they won't be enough.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;&amp;quot;It was not enough simply to release new technology and hope for the world to
improve. You had to develop technology and society together. You really had to
fight, in a principled and continuous way, for human rights. The web offered
people a platform for their voices to be heard, reducing the cost of publishing
and distributing information to effectively nothing. But, used improperly, it
could also be turned into a tool of surveillance and control.&amp;quot;  (timbl)&lt;/p&gt;&lt;/blockquote&gt;&lt;h2 id=&quot;references&quot;&gt;References&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://us.macmillan.com/books/9780374612467/thisisforeveryone/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Tim Berners-Lee, &lt;em&gt;This is for everyone: The unfinished story of the World Wide Web&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://www.edelman.com/sites/g/files/aatuss191/files/2025-01/2025%20Edelman%20Trust%20Barometer_U.S.%20Report.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;2025 Edelman Trust Barometer: Trust and the Crisis of Grievance&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://solidproject.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;The Solid project&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;</content></entry><entry><title>A call to minimalistic programming</title><id>https://lovergine.com/a-call-to-minimalistic-programming.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-09-10T17:00:00Z</updated><link href="https://lovergine.com/a-call-to-minimalistic-programming.html" rel="alternate" /><content type="html">&lt;p&gt;Minimalism in development is a forgotten virtue of our time that should gain
more attention. A straightforward summary of some minimalism principles is
available &lt;a href=&quot;http://minifesto.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;here&lt;/a&gt;. Briefly, the principles of minimalism
in Software Engineering can be summarized as follows, based on the manifesto for
minimalism.&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;em&gt;Fight for Pareto's law&lt;/em&gt;: look for the 20% of effort that will yield 80% of the results.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Prioritize&lt;/em&gt;: minimalism isn't about not doing things but about focusing first on what is important.&lt;/li&gt;&lt;li&gt;&lt;em&gt;The perfect is the enemy of the good&lt;/em&gt;: first do it, then do it right, then do it better.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Kill the baby&lt;/em&gt;: don't be afraid of starting all over again. Fail soon, learn fast.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Add value&lt;/em&gt;: continuously consider how you can support your team and enhance your position in that field or skill.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Basics, first&lt;/em&gt;: always follow top-down thinking, starting with the best practices of computer science.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Think differently&lt;/em&gt;: simple is more complicated than complex, which means you'll need to use your creativity.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Synthesis is the key to communication&lt;/em&gt;: we have to write code for humans, not machines.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Keep it plain&lt;/em&gt;: try to keep your designs to a few layers of indirection.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Clean kipple and redundancy&lt;/em&gt;: minimalism is all about removing distractions.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Most of those principles are coherent with each other and relate heavily to the
well-known Unix &lt;a href=&quot;https://en.wikipedia.org/wiki/KISS_principle&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;KISS principle&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;An extended and fascinating book about the practical application of such
principles is Eric S. Raymond's &lt;a href=&quot;http://www.catb.org/~esr/writings/taoup/html/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;&amp;quot;The Art of Unix Programming&amp;quot;&lt;/em&gt;&lt;/a&gt;, which I
strongly recommend reading. I can also recommend a now-classic volume on the
same topic by John Ousterhout, &lt;a href=&quot;https://web.stanford.edu/~ouster/cgi-bin/book.php&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;&amp;quot;A Philosophy of Software Design&amp;quot;&lt;/em&gt;&lt;/a&gt;. Both outline
practical examples of how minimalism in design can be effectively embraced, with
a focus on doing the right thing sooner rather than later.&lt;/p&gt;&lt;p&gt;The same principles could (or maybe should) be applied even to programming
languages, but this is often a neglected aspect of such a minimalistic approach.
Note that one of the most successful languages of all time is the C language,
which indeed has a straightforward syntax and yet, as such, is not easy to use
correctly (the principle being that what is simple is not necessarily easy, too).
That's because programmers need to create their own abstractions and
layers to build their vision of a software design. This seems to be
precisely the opposite of the C++ or Java approach, where the entire
specification spans thousands of pages, and many high-level abstractions are
integral parts of the language. The same can be applied to Python nowadays,
which started as a simple language, more readable and clean than Perl, but now
has a wide and articulated specification. Again, hundreds of pages are now
needed to describe a once-simple language, where tons of new features and
abstractions have been added to enrich its expressiveness.  If one considers its
standard libraries and modules, the situation appears even worse. Can
such an approach be considered &lt;em&gt;easier&lt;/em&gt;? I don't think so. Let me ask: how can a
program be considered simple if it relies on hundreds (or even thousands,
including dependencies recursively) of external modules, as well as hundreds
of syntactical constructs and glue? Some languages also
manage multi-versioned dependencies, allowing a program to cross-depend on
multiple versions of the same module (yes, JavaScript, I'm talking about you),
with the concrete possibility of introducing obscure bugs as a result. At the
opposite extreme, there is the consideration that we only know and deeply
understand what we make.&lt;/p&gt;&lt;p&gt;Minimalism also means actively seeking a balance between these two opposing
approaches, because reusing third-party modules and packages can be an immediate
solution to deadline urgencies, but can also potentially introduce instability
and dependencies on unmaintained software in the long run.
Long dependency chains, where changes happen independently of the main program's
focus and are driven by third parties' motivations - often with the wrong
timing for depending projects - can cause breakage at multiple levels.&lt;/p&gt;&lt;p&gt;Of course, to reach
the right tradeoff, a few things need to be considered: no single programmer
can be smarter than the many libraries and modules out there, on which
multiple developers may have spent hours, weeks, months, or even years of refinement.
That's true, but it is also true that not all libraries or modules are
written with the same level of quality and effort. For instance, we all know
cases of elementary modules available for Node that could be easily avoided, and
instead are imported out of some form of development laziness. Sometimes, the
features actually needed are only a small portion of the whole
library or module and could be reimplemented with very reasonable effort and
time. This approach is amplified in modern times, when AI tools can
significantly increase productivity in such cases. I would summarize these
concepts with some additional mottos:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;em&gt;Limit your external dependencies&lt;/em&gt;: avoid depending on modules or libraries
unless they are strictly required to significantly reduce the total development
time, are rock stable in their interfaces and features, and have
a clear and stabilized roadmap.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Reproducibility of the software stack is a must&lt;/em&gt;: these days,
&lt;a href=&quot;https://en.wikipedia.org/wiki/Software_supply_chain&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;an SBOM&lt;/a&gt; has
become recommended or even mandatory, but it should not only consist of documentation of external
dependencies and their versions: the full process of building a
runtime environment should also be fully defined and kept consistent in the long
term.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Do not follow the latest oh!-so-cool technology&lt;/em&gt;: while that could be done for
an amateur project developed during spare time, it is not a good idea
to depend on a technology whose future is not clearly stated, without a
well-established development team and proven sustainability in the long
term. I consider it a risk even to depend on a single-company project, and even
more so if that company is a startup. In short, this can be
considered minimalism in coding style.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Moreover, if you are going to use a well-established framework, such as Django,
for developing your mid-to-long-term web project, that is probably better than
using the latest Node.js-based framework, created six months ago, that seems the
next 'big thing'. But that's probably only common sense. Instead, ask yourself
if your project could be created from scratch using a simple &lt;em&gt;Jamstack system&lt;/em&gt;
and some microservices for well-defined and minimal parts. In many cases, that
is more than enough for too many CMS-based sites out there: indeed, I
continuously ask myself why a lot of websites are still based on WordPress, when
most of them could be easily converted into a handful of static pages and simple
JavaScript snippets, which they will use in any case. This can be read as
minimalism in defining computing architectures, which also allows
scaling up applications more easily.&lt;/p&gt;&lt;p&gt;So minimalism principles can be considered at multiple levels: for programming
languages, libraries, architectures, and design. However, they require skills,
in-depth research, and a significant amount of time to dedicate to continuous
refactoring and reflection on viable alternatives. And that's probably the
key point: developers under deadlines and urgency imposed by PMs are too often
tempted to follow the easiest and most feature-rich paths and provide a solution of any
kind without much reflection on the final balance among effort, quality,
efficiency, and durability of results.&lt;/p&gt;&lt;p&gt;Of course, when talking about minimalism, a special mention is due to the whole
&lt;a href=&quot;https://suckless.org&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;suckless effort&lt;/a&gt; on the uncompromising side of minimalism.
And &lt;a href=&quot;https://motherfuckingwebsite.com/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;why not?&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Ok, ok, I'm joking. But you got the point.&lt;/p&gt;</content></entry><entry><title>Still, no silver bullet</title><id>https://lovergine.com/still-no-silver-bullet.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2025-08-24T18:30:00Z</updated><link href="https://lovergine.com/still-no-silver-bullet.html" rel="alternate" /><content type="html">&lt;p&gt;I recently re-read the seminal book by Fred Brooks about software engineering,
entitled &amp;quot;The Mythical Man-Month&amp;quot;, or MM-M for brevity. Specifically, I read the
paper version of the 20th-anniversary edition, revised and reprinted in 1995,
after the first edition of 1975. I did that on purpose, firstly because it is
always a fantastic read, and secondly to understand how much of its content is
still valid today, exactly thirty years after its last revision.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;/images/mm-m.png&quot; alt=&quot;The Mythical Man-Month&quot; /&gt;&lt;/p&gt;&lt;p&gt;Fred passed away in 2022; otherwise, it would be interesting to know his thinking
nowadays, after the LLM boom and the birth of AIAD (AI-Aided Development) as
a new revolutionary (or often seen as such) tool. Hi, Fred, wherever you are.
It is worth mentioning that AI was already taken into consideration by Brooks at
the time, even if limited to expert systems and other rule-based variants, which
seemed promising and were often sold as revolutionary before the mid-90s. A lot of
the book's content entered the history of software engineering, including
the famous &lt;a href=&quot;https://en.wikipedia.org/wiki/Brooks%27s_law&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Brooks' Law&lt;/a&gt;, and the
whole book is still an excellent source of inspiration for the management and
organization of complex intellectual projects (not necessarily limited to
software systems) that involve large teams of individuals.&lt;/p&gt;&lt;p&gt;One of the main theses of the latest book edition is that, in the short term of
10 years from its proposition (the original essay was dated 10 years after the
first edition of the book), he did not expect a &lt;a href=&quot;https://en.wikipedia.org/wiki/No_Silver_Bullet&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;silver bullet&lt;/em&gt;&lt;/a&gt;.
That means no significant technological or managerial development was expected to be able to
improve our productivity in programming by one order of magnitude. Ten years
later, he confirmed the same idea, even considering exceptional tools like
old-generation AI, visual programming, CASE tools, and so on.
Is this thesis contradicted in 2025 by the existence of current AIAD tools,
including chatbots, agents, and AI-empowered IDEs? My honest opinion is no: I mean
not now, and not in the foreseeable future. The reason is exactly the one
Fred gave at the time. Reducing &lt;em&gt;accidental&lt;/em&gt; problems in software creation
(what AI is able to do) cannot be confused with solving the &lt;em&gt;essential&lt;/em&gt; problems of
software creation: the complexity of defining an articulated task, its
analytical specifications, and an algorithmic solution to solve it.  First of
all, ignore the simplistic case of asking an AI engine to implement a very
&lt;em&gt;simple&lt;/em&gt; program. Here, the word simple truly means that. If you can specify a
request within a manageable token context, however large, and formulate your request
as a brief question (say, a question of a few dozen lines),
well, that's probably an example of a simple (or dumb) problem. Too little, too
easy. We are talking about a whole system that is generally difficult to
describe, even in thousands of pages of specifications and documentation,
written collectively by large teams of developers, architects, and domain
experts.&lt;/p&gt;&lt;p&gt;The hard truth is that most of the real-world information systems out there
cannot simply be specified in such a way. We are not able to define an
unambiguous and complete enough specification to describe such systems, not to
mention being truly able to write complete and neat documentation of them,
including their inner workings and use. We live in a deep illusion about that. The
context size needed to reach the level of detail that avoids bugs and ambiguities
in the specification would be impractical for current and even future tools, as well
as for any human. We would in any case get buggy (i.e., incomplete or
misunderstood) results even if the AI engine were able to avoid hallucinations
(which is not the case) and had no limitations on context size. The presence of
AI hallucinations is only accidental in this regard.&lt;/p&gt;&lt;p&gt;With the current AI tooling, we are simply moving the complexity from the writing
of a formal language, step by step, to using natural language at a higher
level of abstraction to express the problem. The complexity is still there, and
it is inherent to the problem. Again, we resolved an accidental difficulty in a
creative manner, no different from moving from assembly to a modern
programming language. Now the difficulty has moved elsewhere, but it is still
there, and natural language is even harder to use precisely than a formal
language. These difficulties translate into multiple refinements and trials to
try to be more precise and get sensible answers and code in a continuous
iteration. Isn't that quite similar to the whole ordinary process of developing a
program? In the most simplistic approach, such a process becomes &lt;em&gt;vibe coding&lt;/em&gt;,
and the iteration can tend to infinity, a forever loop. A smarter
programmer, for an easy task, will instead do that in a reasonable (hopefully limited)
number of iterations. Is that a significant improvement of one order
of magnitude? I think not, as in most past cases. High-level
languages instead of assembly improved coding efficiency, as
asserted in the MM-M, but not by a whole order of magnitude. AIAD is again
another helper to solve accidental difficulties. The problem and all its
complexity are still there. Thinking that we found the silver bullet is again
(and again) an illusion or pure marketing.&lt;/p&gt;&lt;p&gt;So why do many CEOs insist on predicting a bright yet unlikely future in which AI
agents, instead of developers, create applications? Brooks already wrote
about that: there is a profound confusion in exchanging months and men, and an
excess of optimism when approaching software development, even among techies,
but that becomes paroxysmal among managers. No one can seriously provide even a
decent and reasonable estimate of development time starting from incomplete,
ambiguous, or vague specifications: the same systematically happens when
overestimating the capabilities of current AI tools.&lt;/p&gt;&lt;p&gt;So what? AIAD is simply yet another tool among those available to developers,
but the management problem of dominating complex projects is still there, with
all its inherent difficulties. And the possibility of using natural language
instead of a high-level formal one is only an apparent simplification of the
process. It looks more familiar and easier, but it is also much more
ambiguous, and so-called &lt;em&gt;prompt engineering&lt;/em&gt; is again a purely optimistic
illusion, a heuristic approach that tries to overcome our totally insufficient
capability to master nuances and semantics.&lt;/p&gt;</content></entry><entry><title>The shattered Internet</title><id>https://lovergine.com/the-shattered-internet.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2024-08-02T13:00:00Z</updated><link href="https://lovergine.com/the-shattered-internet.html" rel="alternate" /><content type="html">&lt;p&gt;I recently finished reading &lt;a href=&quot;https://www.bollatiboringhieri.it/libri/vittorio-bertola-internet-fatta-a-pezzi-9788833942018/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;a book published one year
ago&lt;/a&gt;,
written by &lt;a href=&quot;https://bertola.eu/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Vittorio Bertola&lt;/a&gt; and &lt;a href=&quot;https://en.wikipedia.org/wiki/Stefano_Quintarelli&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Stefano Quintarelli&lt;/a&gt;.
Unfortunately, it is only available in Italian, but its title perfectly captures the topics
it covers: &lt;em&gt;The shattered Internet: digital sovereignty, nationalisms, and big
techs&lt;/em&gt;. Like me, Vittorio and Stefano are among the relatively few early users and
participants of the primeval Internet of the 90s, even before the World
Wide Web was conceived. This book is a disenchanted and realistic journey through the
story of the &lt;em&gt;Big Network&lt;/em&gt; and how it has become a broken dream today in many
respects.&lt;/p&gt;&lt;p&gt;Thinking about it, it also shares some of the reasons why I started this
self-hosted blog recently. By the end of this post, one could even conclude
that this site and the whole &lt;em&gt;indie web&lt;/em&gt; movement make little sense altogether:
they simply represent another unrealistic attempt to return to the origins.
In short, it's just a daydream. Maybe, or maybe not.&lt;/p&gt;&lt;p&gt;The Internet was conceived from the beginning as a great, unified,
worldwide and resilient web of neutral connections based on open technical
standards and cooperation among developers and participants to allow
end-to-end communications all over the world, without discrimination. At its very
beginning, in the mid-90s, it appeared to the
most tech-savvy people as a dream come true.&lt;/p&gt;&lt;p&gt;Unfortunately, reality later started to show itself in all
its hard truth.  The world is not neutral and equal for all human beings, and
there are multiple drivers of inequality and diversity. Moreover, human groups
tend to create private &lt;em&gt;walled gardens&lt;/em&gt; with deep moats among themselves, often
for the mere interests of the few.&lt;/p&gt;&lt;p&gt;Nowadays, there are at least two great sources of fragmentation for the
Internet, born of its own worldwide success: nationalisms (and, let me add,
different ways of seeing life, values, and society itself) and the creation
of an oligopoly of a few big companies that dominate the network. Companies are
interested in making a profit and maintaining their walled gardens with millions
of users-customers locked in there.
This is not something new, but it is
a big problem when companies have balance sheets larger than those of many countries.&lt;/p&gt;&lt;p&gt;These centrifugal thrusts shatter, a little more every day, the dream of the
big, unified, and peaceful network.
Internet users are more and more confined to narrow bubbles, because
of their nationalities and cultures or the profit plans of the big corps.&lt;/p&gt;&lt;p&gt;Note that, like the book's authors, I don't think that the Western, US-centric
world has the correct or absolute answers for that. In many cases, I cannot share some ideas and
values considered &lt;em&gt;standard thinking&lt;/em&gt; overseas. I don't even know
if the tentative regulation policies here in Europe will succeed in creating
a better and more respectful network.&lt;/p&gt;&lt;p&gt;Moreover, in many countries the Internet is limited and under the control and monitoring of central authorities, and I'm not
talking only about North Korea, China, Russia, Iran, or other nations with some known issues
in accessing the network. As we all discovered in the recent past, even the so-called free
democracies &lt;a href=&quot;https://en.wikipedia.org/wiki/Edward_Snowden&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;show their fallacies&lt;/a&gt; from time to time.&lt;/p&gt;&lt;p&gt;Anyway, as tech-savvy individuals, we have the right and duty to escape as much as possible
from the mainstream short-sighted vision of the network, by diversifying and
avoiding the walled gardens, as well as
the &lt;a href=&quot;https://en.wikipedia.org/wiki/Pens%C3%A9e_unique&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;unique thought&lt;/em&gt;&lt;/a&gt; for the
&lt;a href=&quot;https://en.wikipedia.org/wiki/The_End_of_History_and_the_Last_Man&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;evolution of the society&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;It is a matter of freedom and equality for all of us, even if it is wishful thinking.
And above all, even if many people out there do not care and are willing to give up
their privacy and freedom, too.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;Wake up, Neo...
The Matrix has you...
Follow the white rabbit...
Knock, knock, Neo.&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Wohpe</title><id>https://lovergine.com/wohpe.html</id><author><name>Francesco P. Lovergine</name><email>mbox@lovergine.com</email></author><updated>2024-07-24T17:50:00Z</updated><link href="https://lovergine.com/wohpe.html" rel="alternate" /><content type="html">&lt;p&gt;We recently returned from our traditional Dolomites holiday period, and while there
I eventually managed to read the first novel written by &lt;a href=&quot;http://invece.org/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Salvatore
Sanfilippo&lt;/a&gt; aka &lt;code&gt;antirez&lt;/code&gt;.  Most of you probably know
Salvatore because he was for a long time the lead developer of
&lt;a href=&quot;https://en.wikipedia.org/wiki/Redis&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Redis&lt;/a&gt;, until he left the company
that now holds the ownership of the project, some years ago. He often compared
the creative work of a developer with that of a novelist, and I was curious about
his first book.&lt;/p&gt;&lt;p&gt;First of all, he probably started writing the novel in 2019, and I bought it
in October 2022, before ChatGPT and the AI hype. Therefore, it cannot be
considered an &lt;em&gt;instant book&lt;/em&gt;, written after some big public event just to ride
the wave. It was written by antirez over two years of effort and refinement.&lt;/p&gt;&lt;h2 id=&quot;wohpe&quot;&gt;Wohpe&lt;/h2&gt;&lt;p&gt;The novel is a sci-fi book centered mainly on a couple of interesting
themes: climate change and Artificial General Intelligence. Wohpe is the contraction
of &lt;em&gt;world hope&lt;/em&gt;, and the whole story deals with the central idea of using advanced
self-aware AI to find the &lt;a href=&quot;https://en.wikipedia.org/wiki/Silver_bullet&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;silver bullet&lt;/a&gt;
to manage the climate change that, in a dystopian vision of a not-so-far future, would
cause the certain extinction of humanity.&lt;/p&gt;&lt;p&gt;The novel deals with the long months of development of a large neural network so
big that it reaches the fictional threshold of self-awareness and becomes intelligent enough
to possibly find a final solution to the climate catastrophe.&lt;/p&gt;&lt;p&gt;I will not give more details about the plot, to avoid spoilers for people interested
in reading the novel themselves.&lt;/p&gt;&lt;p&gt;Fun fact: Wohpe is also the name of the &lt;a href=&quot;https://en.wikipedia.org/wiki/Wohpe&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;spirit of Peace&lt;/a&gt;
for the Lakota people, a Sioux tribe, and that may be no coincidence after reading the book.&lt;/p&gt;&lt;h2 id=&quot;about-the-book&quot;&gt;About the book&lt;/h2&gt;&lt;p&gt;I found the writing style in some ways very similar to that of classic sci-fi novelists
I like, such as &lt;a href=&quot;https://it.wikipedia.org/wiki/Isaac_Asimov&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Isaac Asimov&lt;/a&gt;.
If you have read the classic &lt;a href=&quot;https://en.wikipedia.org/wiki/Foundation_(book_series)&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;Foundation trilogy&lt;/em&gt;&lt;/a&gt;,
you can recognize the same slow pace of development, with sudden accelerations in the narration
up to the climax of the novel.&lt;/p&gt;&lt;p&gt;It is a style I personally like, though many people dislike it. That probably explains
some of the negative comments I read about the book, which consider it too slow or full of unnecessary details
that make some parts of the story heavy. I found the book well-balanced instead, with,
of course, the reasonable amount of technical detail about AI technologies that one
would expect from antirez. We are nerds, after all.&lt;/p&gt;&lt;p&gt;I found much of the story well-centered and convincing: it presents a
compelling and potentially realistic future for AI applications that are
currently in their infancy, which I found to be a fascinating aspect of the
book. Moreover, antirez's undoubted technical competence ensures a
well-constructed scenario for their applications, with the right weighting of
details and no improbable or overly imaginative ideas about the reality
of neural networks.&lt;/p&gt;&lt;p&gt;Even in a dystopian context, the book is positively optimistic, maybe too
much for my taste.
I would like to share antirez's faith in humanity's capability to deal
effectively with events that go far beyond the knowledge and understanding
of the average individual, as well as in the possibility for single individuals
to be relevant to the fate of humanity.&lt;/p&gt;&lt;p&gt;Let me now discuss the two main topics of the book and my personal ideas
about them.&lt;/p&gt;&lt;h2 id=&quot;artificial-intelligence-the-next-revolution&quot;&gt;Artificial Intelligence, the next revolution&lt;/h2&gt;&lt;p&gt;The hypothesis of a theoretical network size limit (or number of parameters)
for self-awareness is too simple to be true, but a fascinating one, indeed.
That aside, most of the AI applications described in the book will be practically
possible in the near future, maybe even within the next 10 or 20 years; in
some contexts, I expect things to change heavily in less than 5 years.&lt;/p&gt;&lt;p&gt;The idea that AI will effectively improve the quality of life of human beings,
after a period of adjustment and a mixture of hope and fear about the new wave,
is a credible scenario. We already saw the same with the Internet over the last few
decades, another technology that changed the lives and jobs of a large share of people in many
countries. I predicted the Internet revolution at the beginning of the 90s, and I'm quite sure
that in this case too I will be right: &lt;em&gt;if&lt;/em&gt; excessive regulation
(driven by human fear of novelty) does not dismantle the whole thing at the very beginning,
which remains a possibility in this crazy world.&lt;/p&gt;&lt;h2 id=&quot;climate-change-the-next-challenge&quot;&gt;Climate change, the next challenge&lt;/h2&gt;&lt;p&gt;I am more worried about the implications of climate change for humanity in the medium term.
We are already missing the initial goal of limiting the increase in average
temperatures to under 1.5°C before 2050, and I seriously doubt that our
models can effectively predict the changes in our daily life
that we will have to deal with in the next few years.&lt;/p&gt;&lt;p&gt;We could seriously face big problems in many parts of the world long
before a Wohpe can help us. And in that regard, we are not brilliant at
predicting disasters and acting effectively. Instead, disaster after disaster, we
simply regret what we did not do.&lt;/p&gt;&lt;p&gt;Here in the Mediterranean area, we are already experiencing hot summers with
weeks of continuously high temperatures and humidity, and reduced rainfall (even during
the winter) in many urban areas, as never seen until a few years ago.
Unfortunately, the average citizen treats it as someone else's problem, with
even a sense of annoyance at the few initiatives adopted to limit worldwide
CO2 emissions.&lt;/p&gt;&lt;p&gt;In that regard, our attitude is no better than that of the silly folks
shown in the &lt;a href=&quot;https://it.wikipedia.org/wiki/Don%27t_Look_Up&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;&lt;em&gt;Don't Look Up&lt;/em&gt;&lt;/a&gt;
movie, just waiting for the comet.&lt;/p&gt;&lt;p&gt;I don't know if an AI can help us with all that. For sure, I would
prefer that people in general were aware of the problem and
acted accordingly, even in their daily lives.&lt;/p&gt;</content></entry></feed>