Vienna, the capital of Austria, is one of the most visited cities in the world, popular for its rich history, gardens, and cafes, and for famous figures like Beethoven, Mozart, Gödel, and Freud. It has also been consistently ranked as the most livable city in the world.
For these reasons, I was elated when my friend Snehal invited me last year to visit Vienna for a few days. I included Christmas and New Year’s Eve in my itinerary because of the city’s popular Christmas markets and lively events. The festive season also ensured that Snehal had some days off for sightseeing.
Indians require a visa to visit Austria. Since the travel dates were near, I rushed to book an appointment online with VFS Global in Delhi and quickly arranged the required documents. However, at VFS, I found out that I had applied in the wrong appointment category (tourist), which depends on the purpose of the visit, and that my travel dates did not allow enough time for the visa authorities to make a decision. Apparently, even if you plan to stay with a host for only part of the trip, you need to apply under the category “Visiting Friends and Family”.
Thus, I had to book another appointment under this category, and took the opportunity to shift my travel dates to allow at least 15 business days for the visa application to be processed, removing Christmas and New Year’s Eve from my itinerary.
The process went smoothly, and my visa application was submitted by VFS. For reference, here’s a list of documents I submitted:
VFS appointment letter
Duly-filled visa application form
Original passport
Copy of passport
1 photograph
My bank account statement for the last 6 months
Cover letter
Consent form (that visa processing will take up to 15 business days)
Snehal’s job contract
My work contract
Snehal’s rent contract
Snehal’s residence permit
A copy of Snehal’s passport
Invitation letter from Snehal
Return flight ticket reservations
Travel insurance for the intended travel dates
The following charges were collected from me.
Service Description | Amount (Indian Rupees)
Cash Handling Charge - SAC Code: (SAC:998599) | 0
VFS Fee - India - SAC Code: (SAC:998599) | 1,820
VISA Fee - India - SAC Code: | 7,280
Convenience fee - SAC Code: (SAC:998599) | 182
Courier Service - SAC Code: (SAC:998599) | 728
Courier Assurance - SAC Code: (SAC:998599) | 182
Total | 10,192
I later learned that the courier charges (728 INR) and the courier assurance charges (182 INR) mentioned above were optional. However, VFS didn’t ask whether I wanted to include them. When the embassy is done processing your application, it sends your passport back to VFS, from where you can either collect it yourself or have it couriered home, which is what the courier charge pays for. The courier assurance charge, however, adds no value, as VFS cannot “assure” anything about the courier; I suggest you get it removed.
My visa application was submitted on the 21st of December 2023. A few days later, on the 29th of December 2023, I received an email from the Austrian embassy asking me to submit an additional document:
Subject: AUSTRIAN VISA APPLICATION - AMENDMENT REQUEST: Ravi Dwivedi VIS 4331
Dear Applicant,
On 22.12.2023 your application for Visa C was registered at the Embassy. You are requested to kindly send the scanned copies of the following documents via email to the Embassy or submit the documents at the nearest VFS centre, for further processing of your application:
Kindly submit Electronic letter of guarantee “EVE- Elektronische Verpflichtungserklärung” obtained from the “Fremdenpolizeibehörde” of the sponsor’s district in Austria. Once your host company/inviting company has obtained the EVE, please share the reference number (starting from DEL_____) received from the authorities, with the Embassy.
Kindly Note:
It is in your own interest to fulfil the requirements as indicated above and submit the missing documents within 14 days of the receipt of this email. Otherwise a decision will be taken based on the documentation available.
[Translated from German:] “You are requested, in your own interest, to remedy the indicated deficiencies as quickly as possible or to submit the missing documents promptly, in order to enable further processing of the application. Should you fail to remedy the indicated deficiencies or submit the missing documents within 14 days, a decision will be made on the present application without these documents or the remedying of the deficiencies.”
Austrian Embassy New Delhi
I misunderstood the required document (the EVE) to be a scanned copy of the letter of guarantee form signed by Snehal, and responded by attaching it.
Upon researching, Snehal determined that the document is an electronic letter of guarantee, which must be obtained at a local police station in Vienna. He visited a police station the next day and had a hard time conversing due to the language barrier (German is the common language in Austria, whereas Snehal speaks English). It was the weekend, so he made an appointment for Monday, but in the meantime the embassy had finished processing my visa.
My visa was denied, and the refusal letter stated:
The Austrian embassy in Delhi examined your application; the visa has been refused.
The decision is based on the following reason(s):
The information submitted regarding the justification for the purpose and conditions of the intended stay was not reliable.
There are reasonable doubts as to your intention to leave the territory of the Member States before the expiry of the visa.
Other remarks:
You have been given an amendment request, which you have failed to fulfil, or have only fulfilled inadequately, within the deadline set.
You are a first-time traveller. The social and economic roots with the home country are not evident. The return from Schengen territory does therefore not seem to be certain.
To summarize:
If you are visiting a host, then the category of appointment at VFS must be “Visiting Friends and Family” rather than “Tourist”.
VFS charged me for courier assurance, which is an optional service. Make sure to get such optional charges removed from your bill.
Neither my travel agent nor the VFS application center mentioned the EVE.
Snehal informed me that a mere two months earlier, his wife’s visa had been approved without an EVE. This hints at inconsistency in the processing of applications, even those under identical categories.
Such incidents are a waste of time and money for applicants, and an embarrassment to VFS and the Austrian visa authorities. I suggest that the Austrian visa authorities fix that URL, and provide instructions for hosts to obtain the EVE.
Credits to Snehal and Contrapunctus for editing, Badri for proofreading.
On Saturday 3 August 2024, the annual Debian Developers and Contributors
Conference came to a close.
Over 339 attendees representing 48 countries from around the world came
together for a combined 108 events made up of Talks, Discussions, Birds of a
Feather (BoF) gatherings, workshops, and activities in support of furthering
our distribution and free software (25 patches submitted to the Linux kernel),
learning from our mentors and peers, building our community, and having a bit
of fun.
The conference was preceded by the annual
DebCamp hacking session held July 21st
through July 27th where Debian Developers and Contributors convened to
focus on their Individual Debian related projects or work in team sprints
geared toward in-person collaboration in developing Debian.
This year featured a BootCamp held for newcomers, with a GPG workshop and
an introduction to creating .deb files (Debian packaging), staged by a team
of dedicated mentors who shared hands-on experience in Debian and offered a
deeper understanding of how to work in and contribute to the community.
The actual Debian Developers Conference started on Sunday, July 28, 2024.
In addition to the traditional 'Bits from the DPL' talk, the continuous
key-signing party, lightning talks and the announcement of next year's
DebConf25, there were several update sessions shared by internal projects
and teams.
Many of the hosted discussion sessions were presented by our technical
core teams, with the usual and useful meet-ups with the Technical Committee
and the ftpteam, and a set of BoFs about packaging policy and Debian
infrastructure, including talks about APT and the Debian Installer.
Internationalization and localization were the subject of several talks.
The Python, Perl, Ruby, and Go programming language teams, as well as the
Med team, also shared updates on their work and efforts.
More than fifteen BoFs and talks about community, diversity and local
outreach highlighted the work of the various teams involved in the social
aspects of our community. This year again, Debian Brazil shared strategies
and actions to attract and retain new contributors and members, and
opportunities both in Debian and F/OSS.
The schedule
was updated each day with planned and ad-hoc activities introduced by
attendees over the course of the conference. Several traditional activities
took place: a job fair, a poetry performance, the Cheese and Wine party,
the group photos and the day trips.
For those who were not able to attend, most of the talks and sessions were
recorded and live-streamed, with the recorded videos made available through
links in their summaries in the
schedule.
Almost all of the sessions facilitated remote participation via IRC
messaging apps or online collaborative text documents, which allowed remote
attendees to 'be in the room' to ask questions or share comments with the
speaker or assembled audience.
DebConf24 saw over 6.8 TiB of data streamed, 91.25 hours of scheduled talks,
20 network access points, 1.6 km of fiber (1 broken fiber...) and 2.2 km of
UTP cable deployed, stream viewers from more than 20 countries (by GeoIP),
354 T-shirts, 3 day trips, and up to 200 meals planned per day.
All of these events, activities, conversations, and streams, coupled with
our love, interest, and participation in Debian and F/OSS, certainly made
this conference an overall success, both here in Busan, South Korea and
online around the world.
The DebConf24 website
will remain active for archival purposes and will continue to offer
links to the presentations and videos of talks and events.
Next year, DebConf25 will be held
in Brest, France, from Monday, July 7 to Monday, July 21, 2025. As is
tradition, before the next DebConf the local organizers in France will
start the conference activities with a DebCamp, with a particular focus on
individual and team work towards improving the distribution.
Debian thanks the numerous
sponsors
for their commitment to supporting DebConf24, particularly our Platinum Sponsors:
Infomaniak,
Proxmox,
and Wind River.
We also wish to thank our Video and Infrastructure teams, the DebConf24
and DebConf committees, our host nation of South Korea, and each and every
person who helped contribute to this event and to Debian overall.
Thank you all for your work in helping Debian continue to be "The Universal
Operating System".
See you next year!
About Debian
The Debian Project was founded in 1993 by Ian Murdock to be a truly free
community project. Since then the project has grown to be one of the
largest and most influential open source projects. Thousands of
volunteers from all over the world work together to create and maintain
Debian software. Available in 70 languages, and supporting a huge range
of computer types, Debian calls itself the universal operating system.
About DebConf
DebConf is the Debian Project's developer conference. In addition to a
full schedule of technical, social and policy talks, DebConf provides an
opportunity for developers, contributors and other interested people to
meet in person and work together more closely. It has taken place
annually since 2000 in locations as varied as Scotland, Argentina, Bosnia
and Herzegovina, and India. More information about DebConf is available from
https://debconf.org/.
About Infomaniak
Infomaniak is an independent cloud service
provider recognized throughout Europe for its commitment to privacy, the
local economy and the environment. Recording growth of 18% in 2023, the
company is developing a suite of online collaborative tools and cloud
hosting, streaming, marketing and events solutions. Infomaniak uses
exclusively renewable energy, builds its own data centers and develops its
solutions in Switzerland, without relocating. The company powers the
website of the Belgian radio and TV service (RTBF) and provides streaming
for more than 3,000 TV and radio stations in Europe.
About Proxmox
Proxmox provides powerful and user-friendly
Open Source server software. Enterprises of all sizes and industries use
Proxmox solutions to deploy efficient and simplified IT infrastructures,
minimize total cost of ownership, and avoid vendor lock-in. Proxmox also
offers commercial support, training services, and an extensive partner
ecosystem to ensure business continuity for its customers. Proxmox Server
Solutions GmbH was established in 2005 and is headquartered in Vienna,
Austria. Proxmox builds its product offerings on top of the Debian
operating system.
About Wind River
For nearly 20 years, Wind River
has led in commercial Open Source Linux solutions for mission-critical
enterprise edge computing. With expertise across aerospace, automotive,
industrial, telecom, and more, the company is committed to Open Source
through initiatives like eLxr, Yocto, Zephyr, and StarlingX.
This month I accepted 502 and rejected 40 packages. The overall number of packages that got accepted was 515.
In case you want to upload dozens of packages, it would be nice to give a heads-up beforehand. It is kind of a shock to see a full NEW queue in the morning when it was much shorter the evening before.
Debian LTS
This was the hundred-twenty-first month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
[#1074439] bookworm-pu: cups 2.4.2-3+deb12u7 has been marked for accept
This month I finished the new version of tiff for Bullseye (and Bookworm). The upload will follow when Bullseye has been handed over to the LTS team in August.
Last but not least I attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the seventy-second ELTS month. During my allocated time I uploaded or worked on:
[ELA-1126-1-1] exim4 security update for one CVE. This was the delayed ELA I mentioned in my last report.
[ELA-1144-1-1] exim4 security update for one CVE to fix parsing of multiline RFC 2231 header filenames in Stretch and Buster. Jessie was not affected by this issue.
Uploaded new versions of tiff for Jessie and Stretch, which had got stuck in the autopkgtests.
For whatever reason, I had trouble with the CI again: the new tiff package was supposed to run the autopkgtest of cups but never did. So the corresponding ELA will appear only in August.
I also continued to work on an update for libvirt. There really is a reason why some packages don’t get much attention. Nevertheless, someone has to take care of them. I also did a week of FD and attended the LTS/ELTS meeting.
A relative also has one, which is working well for them but which has had some problems; it was only because of their issues that I discovered that holding the button down for a long time (longer than usual for a device reset) makes a PineTime reboot. I also once had their device get into a bad state where the only thing I could do was flash a newer firmware, which fortunately fixed the problem.
My latest issue is the battery life. Recently it has been taking ages to get above about 90% when charging, and the time it takes to drop to ~70% after a charge seems to be decreasing. Yesterday it suddenly went to 13% after being at 73% the previous night, and then stayed at 13% all day. The reporting seems quite inaccurate, but the battery also doesn’t seem to be lasting as long as before.
Generally it seems to me that Pine64 products are almost great. I won’t rule out the possibility of a newer firmware for the PineTime alleviating the battery issues (or at least reporting the status accurately) and making Bluetooth connectivity more reliable (even on older phones). For the PinePhone Pro, an update to Mobian could reduce power wasted by user space (there’s an issue I have reported in Plasma Mobile, but no-one is interested in working on it before KDE 6), and a kernel update could improve things. But I don’t think there’s any possibility of it ever having the battery last a day while polling Matrix and Jabber servers, which is something that every Android phone can do without problems.
In the movie Queen, the actor Kangana Ranaut plays the role of Rani Mehra,
a 24-year-old Punjabi woman, a simple, homely girl who was always reliant
on her family. Similar to Rani, I too rarely ventured out without my
parents and often needed my younger sibling by my side. Inspired by her
transformation, I decided it was time to take control of my own story and
discover who I truly am.
Trip Requirements
My First Passport
The journey began with a significant first step: Obtaining my first
passport❗️
Never having had one before, I scheduled the nearest available interview
date, June 29, 2022. This meant traveling to Solapur, a city 309 km from my
hometown, accompanied by my father. After successfully completing the
interview, I received my passport on July 14, 2022.
Selecting A Country, Booking Flights And Accommodation
Excited and ready to embark on my adventure, I planned a trip to Albania 🇦🇱
and booked the flight tickets. Why? I had heard from friends that it was a
beautiful European country with beaches and other attractions, and,
importantly, it didn’t require a visa for Indian citizens and was more
affordable than other European destinations. Before heading to Albania, I
planned an overnight stop in Abu Dhabi on a transit visa, thanks to a friend
who knew the process for obtaining it.
Some of my friends were also traveling to Europe at the same time, with
plans quite close to mine, but I only realized that after the trip. 😉
Day 1, Starting The Experience
On July 20, 2022, I started my journey by traveling from Pune, Maharashtra,
to Delhi, where my brother lives. He came to see me off at the airport,
adding a touch of warmth and support to the beginning of my solo adventure.
Upon arriving in Delhi, with my next flight scheduled for July 21, I stayed
at a backpacker hostel, Zostel, in Paharganj, Delhi, to rest.
During my stay, I noticed that many travelers at the hostel carried
rucksacks, which sparked a desire in me to get one for my own trip to
Europe. Up until then, I had always shopped with my mom and had never bought
anything on my own. Inspired by the travelers, I set out to find a suitable
rucksack. I traveled alone by metro from Paharganj to Rohini to visit a
Decathlon store, where I purchased a 50-liter rucksack. This was a
significant step in preparing for my European adventure and marked a
milestone in my journey of self-reliance.
Day 2, Flying To Abu Dhabi
The following day, July 21, 2022, I had a flight to Abu Dhabi. I spent the
night at the hostel to rest before my journey. On the day of the flight, I
needed to reach the airport by 3 PM, and a friend kindly came to drop me
off. With my rucksack packed and excitement building, I was ready for the
next leg of my adventure.
When we arrived at the airport, my friend saw me off, marking the start of
my international journey. With mom-made spices, chutneys, and chilli flakes
packed for comfort, I completed my immigration process in about two and a
half hours. I then settled at the gate for my flight, feeling a mix of
excitement and anxiety as thoughts raced through my mind.
To ease my nerves, I struck up a conversation with a man seated nearby who was
also traveling to Abu Dhabi for work. He provided helpful information about
safety and transportation in Abu Dhabi, which reassured me. With the boarding
process complete and my anxiety somewhat eased, I found my window seat on the
flight and settled in, excited for the journey ahead. Next to me was a young
man from Ranchi (Jharkhand, India), heading to Abu Dhabi for work at a mining
factory. We had an engaging conversation about work culture in Abu Dhabi and
recruitment from India.
Upon arriving in Abu Dhabi, I completed my transit, collected my luggage, and
began finding my way to my hotel, the Premier Inn Abu Dhabi,
which was in the airport area. To my surprise, I ran into the same man from the
flight, now in a cab. He kindly offered to drop me at my hotel, which I gladly
accepted since navigating an unfamiliar city with a short acquaintance felt
safer.
At the hotel gate, he asked if I had local currency (Dirhams) for payment,
as sometimes online transactions can fail. That hadn’t crossed my mind, and
I realized I might be left stranded if a transaction failed. Recognizing his
help as a godsend, I asked if he could lend me some Dirhams, promising to
transfer the amount later. He kindly told me I could pay him back once I
reached my hotel room. With that relief, I checked into the hotel, feeling
deeply grateful for the unexpected assistance and transferred the money to
him after getting to my room.
Day 3, Flying And Arriving In Tirana
Once in the hotel room, I found it hard to sleep, anxious about waking up on
time for my flight. I set an alarm to wake up early, but my subconscious mind
kept me alert, and I woke up before the alarm went off. I freshened up and
went down for breakfast, where I found some vegetarian options like
Idli-Sambar and bread with butter, along with some morning tea.
After breakfast, I headed back to the airport, ready to catch my flight to my
final destination: Tirana, Albania.
I reached Tirana, Albania after a six-hour flight, feeling exhausted and
suffering from a headache. The air pressure had blocked my ears, and jet lag
added to my fatigue. After collecting my checked luggage, I headed to the
first ATM at the airport. Struggling to insert my card, I asked a nearby
gentleman for help. He tried his best, but my card got stuck inside the
machine. Panic 🥵 set in as I worried about how I would survive without money.
Taking a deep breath, I found an airport employee and explained the situation.
The gentleman stayed with me, offering support and repeatedly apologizing for
his mistake. However, it wasn’t his fault: the ATM was out of order, which I
hadn’t noticed. My focus was solely on retrieving my ATM card. The airport
employee worked diligently, using a hairpin to carefully extract my card.
Finally, the card was freed, and I felt an immense sense of relief, grateful
for the help of these kind strangers. I used another ATM, successfully withdrew
money, and then went to an airport mobile SIM shop to buy a new SIM card for
local internet and connectivity.
Day 4, Arriving In Tirana, Facing Challenges In A Foreign Country
I had booked a stay at a backpacker hostel near the city center of Tirana.
After sorting out the ATM and SIM card issues, I searched for a bus or any
transport to get there. It was quite late, around 8:30 PM, and being in a
new city, I was in a hurry. I saw a bus about to leave the airport, stopped
it, and asked if it went to the city center. They gave me the green flag, so
I boarded the airport service bus and reached the city center.
Feeling very tired, I discovered that the hostel was about an hour and a
half away by walking. Deciding to take a cab, I faced a challenge as the
driver couldn’t understand my English or accent. Using a mobile translator
to convert my address from English to Albanian, I finally communicated my
destination to him. With that sorted out, I headed to the
Blue Door Backpacker Hostel and arrived around 9 PM,
relieved to have finally reached my destination, and checked in.
I found my top bunk bed, only to realize I had booked a mixed-gender
dormitory. This detail had completely escaped my notice during the booking
process. I felt unsure about how to handle the situation. Coincidentally,
my experience mirrored what Kangana faced in the movie “Queen”.
Feeling acidic due to an empty stomach and exhausted from the heavy
traveling, I wasn’t up to cooking in the hostel’s kitchen.
I asked the front desk about the nearest restaurant. It was nearly 9:30 PM,
and the streets were deserted. To avoid any mishaps like in the movie
“Queen,” I kept my passport securely locked in my bag, ensuring it wouldn’t
be a victim of theft.
Venturing out for dinner, I felt uneasy on the quiet streets. I eventually
found a restaurant recommended by the hostel, but the menu was almost
entirely non-vegetarian. I struggled to ask about vegetarian options and was
uncertain if any dishes contained eggs, as some people consider eggs to be
vegetarian. Feeling frustrated and unsure, I left the restaurant without
eating.
I noticed a nearby grocery store that was about to close and managed to get
a few extra minutes to shop. I bought some snacks, wafers, milk, and tea
bags (though I couldn’t find tea powder to make Indian-style tea). Returning
to the hostel, I made do with wafers, cookies, and milk for dinner. That day
was incredibly tough for me; filled with exhaustion and struggling in a new
country, I was on the verge of tears 🥹.
I made a video call home before sleeping on the top bunk bed. It was a new
experience for me, sharing a room with both unknown men and women. I kept my
passport safe inside my purse and under my pillow while sleeping, staying
very conscious about its security.
Day 5, Exploring Nearby Places
I woke up the next day at noon. After I had some coffee, the girl managing
the hostel asked if I wanted breakfast. She offered curd with cornflakes,
which I refused because I don’t like curd. Instead, with her help, I
ordered a pizza from a vegetarian pizza place, and I started feeling
better.
I met some people in the hostel, some from Syria and others from Italy. I
struggled to understand their accents but kept pushing myself to get
involved in their discussions. Despite the challenges, I felt more at ease
and was slowly adapting to my new environment.
I went out of the hostel in the evening to buy some vegetables to cook
something. I searched for shops and found some potatoes, tomatoes, and rice. I
decided to cook Khichdi, an Indian dish made with rice, and added
some chili flakes I brought from home. After preparing my dinner, I ate and
then went to sleep again.
Day 6, Tirana's Recent History
The next day, I planned to explore the city and visited Bunkart-1,
a fascinating museum in a massive underground bunker from the communist era.
Originally built as a shelter for Albania’s political and military elite, it
now offers a unique glimpse into the country’s history under Enver Hoxha’s
oppressive regime. The museum’s exhibits include historical artifacts,
photographs, and multimedia displays that detail the lives of Albanians during
that time. Walking through the dimly lit corridors, I felt the weight of
history and gained a deeper understanding of Albania’s past.
Day 7-8, Meeting Friends From India
The next day, I happened to meet Chirag, who was returning from the
Debian Conference 2022 held in Prizren, Kosovo, and staying at the same
hostel. When I encountered him, he was talking on the phone, and I
recognized he was Indian by his accent. I introduced myself, and we
discovered we had some mutual friends.
Chirag told me that our common friend, Raju, was also coming to stay at the
hostel the next day. This news made me feel relaxed and happy to have
people I knew around. When Raju arrived, the three of us, Chirag, Raju, and
I planned to have dinner at an Indian restaurant and explore Tirana city. I
had a great time talking and enjoying their company.
Day 9-10, Meeting More Friends
Raju had a ticket to leave soon, so Chirag and I made a plan to visit
Shkodër and the nearby Komani Lake for kayaking. We started our journey
early in the morning by bus and reached Shkodër. There, we met new friends
from the conference, Pavit and Abraham, who were already there. We had
dinner together and enjoyed an ice cream treat from Chirag.
Day 12, Kayaking And Saying Goodbye To Friends
The next day, Pavit and Abraham had a flight back to India, so Chirag and I
went to Komani Lake. We had an adventurous time kayaking, even though
neither of us knew how to swim. We took a ferry through the backwaters to
the island on Komani Lake and enjoyed a fantastic adventure together. After
our trip, Chirag returned to Tirana for his flight back to India, leaving me
to continue my journey alone.
Day 13, Climbing Rozafa Castle
While stopping at Shkodër, I visited Rozafa Castle. Despite the language
barrier, as most locals only spoke Albanian, people around me guided me
correctly on how to get there. At times, I used applications like Google
Translate to communicate. To read signs or hotel menus, I used Google
Photos' language converter. I even used the audio converter to understand
and speak some basic Albanian phrases.
I took a bus from Shkodër to the southern part of Albania, heading to
Sarandë. The journey lasted about five to six hours, and I had booked a stay
at Mona’s Hostel. Upon arrival, I met Eliza from America, and we went
together to Ksamil Beach, spending a wonderful day there.
Day 14, Vlora Beach: Beachside Cycling
Next, I traveled to Vlorë, where I stayed for one day. During my time there, I
enjoyed beachside cycling on a bicycle provided by the hostel owner and spent
some time feeding fish. I also met a fellow traveler from Delhi who had brought
along some preserved Indian curry. He kindly shared it with me, which was a
welcome change after nearly 15 days without authentic Indian cuisine, except
for what I had cooked myself in various hostels.
Day 15-16, Visiting Durrës, Travelling Back To Tirana
I then visited Durrës, exploring its beautiful beaches, before heading back
to Tirana one day before my flight home. On the day of my flight, my alarm
didn’t go off, and I woke up late at the hostel. In a frantic rush, I packed
everything in just five minutes and dashed toward the city center to catch
the bus to the airport. If I had been just five minutes later, I would have
missed the bus. Thankfully, I managed to stop it just in time and began my
journey back home, reflecting on the incredible adventure I had experienced.
I arrived at the airport just in time and, after
clearing immigration, boarded my flight, which had a layover in Warsaw,
Poland. The journey from Tirana to Warsaw took about two and a half hours,
followed by a seven to eight-hour flight from Poland back to India. Once I
arrived in Delhi, I returned to Zostel and booked a train ticket to
Aurangabad for three days later.
Looking Back 😄
This trip was an incredible adventure for me. I never imagined I could
accomplish something like this, but I did. Meeting diverse people,
experiencing different cultures, and learning so much made this journey
truly unforgettable.
Looking back, I realize how much I’ve grown from this experience. Although I
may have more opportunities to travel abroad in the future, this trip will
always hold a special place in my heart. The memories I made and the
incredible people I met along the way are irreplaceable.
This experience goes beyond what I can express through this blog or words;
it was incredibly precious to me. Every moment of this journey is etched in
my memory, and I am grateful for every part of it.
I care about performance, and I care about benchmarking. So it really
annoys me when people throw out stuff like “this is 0.3% faster so it's
a win”, without saying anything about the uncertainty in their benchmark
estimates.
Turns out this is actually a fairly hard problem, since performance
is essentially sum(before) / sum(after), and dividing anything by anything
is rarely well-behaved in statistics. So the best I see is usually
something like “worst and best we've seen”, which isn't… all that useful?
So at work, I coded up an implementation of the statistical bootstrap,
based on some R code I've used for a while. It gives reasonable 95% and 99%
confidence intervals of unpaired data, without relying on assumptions of
normality (including via the central limit theorem); here's a set of
benchmarks I ran recently over an optimization, as an example.
The program itself is geared towards interpreting a Chromium-specific
output format (it is not a test runner), but the actual statistics
code is encapsulated in a class with no other dependencies than a PRNG,
a simple sorter and a math library, so it should be simple to port to other languages
and environments. Like the rest of Chromium, it is liberally licensed.
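To make the idea concrete, here is a minimal Python sketch of a percentile
bootstrap for a ratio-of-sums metric. This is my own illustration with made-up
timing data, not the Chromium tool or the original R code:

    import random

    def bootstrap_ratio_ci(before, after, iterations=10000, seed=0):
        """Percentile-bootstrap confidence intervals for
        sum(before) / sum(after), treating the two groups as unpaired."""
        rng = random.Random(seed)
        ratios = []
        for _ in range(iterations):
            # Resample each group with replacement, keeping the original sizes.
            b = [rng.choice(before) for _ in before]
            a = [rng.choice(after) for _ in after]
            ratios.append(sum(b) / sum(a))
        ratios.sort()

        def pct(p):
            # Nearest-rank percentile of the sorted resampled ratios.
            return ratios[min(len(ratios) - 1, int(p * len(ratios)))]

        return {"median": pct(0.50),
                "95%": (pct(0.025), pct(0.975)),
                "99%": (pct(0.005), pct(0.995))}

    # Hypothetical wall-clock timings (seconds) from repeated before/after runs.
    before = [10.3, 10.1, 10.4, 10.2, 10.6, 10.3]
    after = [10.0, 9.9, 10.2, 10.1, 10.0, 10.3]
    print(bootstrap_ratio_ci(before, after))

This is the simplest percentile variant; a production tool would typically
also worry about bias correction (e.g. BCa intervals) and outlier handling.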
The diffoscope maintainers are pleased to announce the release of diffoscope
version 274. This version includes the following changes:
[ Chris Lamb ]
* Add support for IO::Compress::Zip >= 2.212. (Closes: #1078050)
* Don't include debug output when calling dumppdf(1).
* Append output from dumppdf(1) in more cases.
(Closes: reproducible-builds/diffoscope#387)
* Update copyright years.
[ Mattia Rizzolo ]
* Update the available architectures for test dependencies.
In our reports, we outline what we’ve been up to over the past month and highlight news items in software supply-chain security more broadly. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.
Last month, we were very pleased to announce the upcoming Reproducible Builds Summit, set to take place from September 17th — 19th 2024 in Hamburg, Germany. We are thrilled to host the seventh edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.
If you’re interested in joining us this year, please make sure to read the event page, which has more details about the event and location. We are very much looking forward to seeing many readers of these reports there.
In a recent edition of Linux Weekly News, Daroc Alden has written an article on “bootstrappable” builds. Starting with a brief introduction that…
… a bootstrappable build is one that builds existing software from scratch — for example, building GCC without relying on an existing copy of GCC. In 2023, the Guix project announced that the project had reduced the size of the binary bootstrap seed needed to build its operating system to just 357-bytes — not counting the Linux kernel required to run the build process.
The article goes on to describe that “now, the live-bootstrap project has gone a step further and removed the need for an existing kernel at all”, and concludes:
The real benefit of bootstrappable builds comes from a few things. Like reproducible builds, they can make users more confident that the binary packages downloaded from a package mirror really do correspond to the open-source project whose source code they can inspect. Bootstrappable builds have also had positive effects on the complexity of building a Linux distribution from scratch […]. But most of all, bootstrappable builds are a boon to the longevity of our software ecosystem. It’s easy for old software to become unbuildable. By having a well-known, self-contained chain of software that can build itself from a small seed, in a variety of environments, bootstrappable builds can help ensure that today’s software is not lost, no matter where the open-source community goes from here
Trisquel developer Simon Josefsson wrote an interesting blog post comparing the output of the .deb files from our tests.reproducible-builds.org testing framework and the ones in the official Debian archive. Following up from a previous post on the reproducibility of Trisquel, Simon notes that “typically [the] rebuilds do not match the official packages, even when they say the package is reproducible”, Simon correctly identifies that “the purpose of [these] rebuilds are not to say anything about the official binary build, instead the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match.”
However, Simon’s post swiftly moves on to announce a new tool called debdistrebuild that performs rebuilds of the difference between two distributions in a GitLab pipeline and displays diffoscope output for further analysis.
Mehdi Keshani, Tudor-Gabriel Velican, Gideon Bot and Sebastian Proksch of the Delft University of Technology, Netherlands, have published a new paper in the ACM Software Engineering on a new tool to automatically reproduce Apache Maven artifacts:
Reproducible Central is an initiative that curates a list of reproducible Maven libraries, but the list is limited and challenging to maintain due to manual efforts. [We] investigate the feasibility of automatically finding the source code of a library from its Maven release and recovering information about the original release environment. Our tool, AROMA, can obtain this critical information from the artifact and the source repository through several heuristics and we use the results for reproduction attempts of Maven packages. Overall, our approach achieves an accuracy of up to 99.5% when compared field-by-field to the existing manual approach [and] we reveal that automatic reproducibility is feasible for 23.4% of the Maven packages using AROMA, and 8% of these packages are fully reproducible.
Nichita Morcotilo reached out to the community, first to share their efforts “to build reproducible packages cross-platform” with a new build tool called rattler-build, noting that “as you can imagine, building packages reproducibly on Windows is the hardest challenge (so far!)”. Nichita goes on to mention that the Apple ecosystem appears to be using ZERO_AR_DATE over SOURCE_DATE_EPOCH. […]
Roland Clobus announced that the Debian bookworm 12.6 live images are “nearly reproducible”, with more detail in the post itself and input in the thread from other contributors.
Daniel Gröber asked for help in getting the Yosys documentation to build reproducibly, citing issues in inter alia the PDF generation causing differing CreationDate metadata values.
James Addison continued his long journey towards getting the Sphinx documentation generator to build reproducible documentation. In this thread, James concerns himself with the problem that even “when SOURCE_DATE_EPOCH is configured, Sphinx projects that have configured their copyright notices using dynamic elements can produce nonsensical output under some circumstances.” James’ query ended up generating a number of replies.
Allen ‘gunner’ Gunner posted a brief update on the progress the core team is making towards introducing a Code of Conduct (CoC) such that it is “in place in time for the RB Summit in Hamburg in September”. In particular, gunner asks “if you are interested in helping with CoC design and development in the weeks ahead, simply email rb-core@lists.reproducible-builds.org and let us know”. […]
[S]oftware repositories are a vital component of software development and release, with packages downloaded both for direct use and to use as dependencies for other software. Further, when software is updated due to patched vulnerabilities or new features, it is vital that users are able to see and install this patched version of the software. However, this process of updating software can also be the source of attack. To address these attacks, secure software update systems have been proposed. However, these secure software update systems have seen barriers to widespread adoption. The Update Framework (TUF) was introduced in 2010 to address several attacks on software update systems including repository compromise, rollback attacks, and arbitrary software installation. Despite this, compromises continue to occur, with millions of users impacted by such compromises. My work has addressed substantial challenges to adoption of secure software update systems grounded in an understanding of practical concerns. Work with industry and academic communities provided opportunities to discover challenges, expand adoption, and raise awareness about secure software updates. […]
Fay Stegerman performed some in-depth research surrounding her apksigcopier tool, after some Android .apk files signed with the latest apksigner could no longer be verified as reproducible. Fay identified the issue as follows:
Since build-tools >= 35.0.0-rc1, backwards-incompatible changes to apksigner break apksigcopier as it now by default forcibly replaces existing alignment padding and changed the default page alignment from 4k to 16k (same as Android Gradle Plugin >= 8.3, so the latter is only an issue when using older AGP). […]
Lastly, diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded version 272 and Mattia Rizzolo uploaded version 273 to Debian, and the following changes were made as well:
Chris Lamb:
Ensure that the convert utility is from ImageMagick version 6.x. The command-line interface has seemingly changed with the 7.x series of ImageMagick. […]
Factor out version detection in test_jpeg_image. […]
Correct the import of the identify_version method after a refactoring change in a previous commit. […]
Move away from using DSA OpenSSH keys in tests as support has been deprecated and removed in OpenSSH version 9.8p1. […]
Move to assert_diff in the test_openssh_pub_key package. […]
There were a number of improvements made to our website this month, including:
Bernhard M. Wiedemann updated the SOURCE_DATE_EPOCH page to include instructions on how to create reproducible .zip files from within Python using the zipfile module; a sketch of the idea follows this list. […]
Chris Lamb fixed a potential duplicate heading on the Projects page. […]
Fay Stegerman added rbtlog to the Tools page […] and IzzyOnDroid to the Projects page […], also ensuring that the latter page was always sorted regardless of the ordering within the input data files. […]
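For readers who haven’t seen the technique, the core of such instructions is
to pin every archive member’s timestamp to SOURCE_DATE_EPOCH and write
entries in a deterministic order. A minimal sketch of the idea (my own
illustration, not necessarily the exact recipe from the SOURCE_DATE_EPOCH
page):

    import os
    import time
    import zipfile

    # ZIP timestamps cannot predate 1980, so clamp SOURCE_DATE_EPOCH
    # accordingly (315532800 is 1980-01-01T00:00:00Z).
    epoch = max(int(os.environ.get("SOURCE_DATE_EPOCH", "315532800")), 315532800)
    date_time = time.gmtime(epoch)[:6]  # (year, month, day, hour, minute, second)

    with zipfile.ZipFile("out.zip", "w") as zf:
        # Sort the member list: deterministic ordering matters as much as
        # deterministic timestamps.
        for name in sorted(["b.txt", "a.txt"]):
            info = zipfile.ZipInfo(name, date_time=date_time)
            with open(name, "rb") as f:
                zf.writestr(info, f.read())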
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In July, a number of changes were made by Holger Levsen, including:
Perform a dummy change to force update of all jobs. […][…]
In addition, Vagrant Cascadian performed some necessary node maintenance of the underlying build hosts. […]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
I’m finishing typing up this blog entry hours before my last 13-hour leg back home, after I spent 2 weeks in Busan, South Korea for DebCamp24 and DebConf24. I had a rough year and decided to take it easy this DebConf. So this is the first DebConf in a long time where I didn’t give any talks. I mostly caught up on a bit of packaging, worked on DebConf video stuff, attended a few BoFs and talked to people. Overall it was a very good DebConf, which also turned out to be more productive than I expected it would be.
In the welcome session on the first day of DebConf, Nicolas Dandrimont mentioned that a benefit of DebConf is that it provides a sort of caffeine for your Debian motivation. I could certainly feel that effect swell as the days went past, and it’s nice to be excited again about some ideas that would otherwise be fading.
Recovering DPL
It’s a bit of a gear shift, having been DPL for 4 years, and on the DebConf Committee for nearly 5 years before that, and then being at DebConf while issues arise (as they always do during a conference). At first I jump into high-alert mode, but then I have to remind myself “it’s not your problem anymore” and let others deal with it.
It was nice spending a little in-person time with Andreas Tille, our new DPL; we did some more handover and discussed some current issues. I still have a few dozen emails in my DPL inbox that I need to collate and forward to Andreas, and I hope to finish all that up by the end of August.
During the Bits from the DPL talk, the usual question came up whether Andreas will consider running for DPL again, to which he just responded in a slide “Maybe”. I think it’s a good idea for a DPL to do at least two terms if it all works out for everyone, since it takes a while to get up to speed on everything.
Also, having been DPL for four years, I have a lot to say about it, and I think there’s a lot we can fix in the role, or at least discuss. If I had the bandwidth for it I would have scheduled a BoF, but I’ll very likely do that at the next DebConf instead!
Video team
I set up the standby loop for the video streaming setup. We call it loopy; it’s a bunch of OBS scenes that provide announcements and show sponsors, the schedule and some social content. I wrote about it back in 2020, but it’s evolved quite a bit since then, so I’m probably due to write another blog post with a bunch of updates on it. I hope to organise a video team sprint in Cape Town in the first half of next year, so I’ll summarize everything before then.
It would’ve been great if we could have had some displays in the social areas to show talks, the loop and other content, but we were just too pressed for time for that. This year’s DebConf had a very compressed timeline, and there was just too much that had to be done and figured out at the last minute. This put quite a lot of strain on the organisers, but I was glad to see that, for the most part, attendees were very sympathetic to some rough edges (but I digress…).
I added more of the OBS machine setup to the videoteam’s ansible repository, so as of now it just needs an ansible setup and the OBS data and it’s good to go. The loopy data is already in the videoteam git repository, so I could probably just add a git pull and create some symlinks in ansible and then that machine can be installed from 0% to 100% by just installing via debian-installer with our ansible hooks.
This DebConf I volunteered quite a bit for actual video roles during the conference, something I didn’t have much time for in recent DebConfs, and it’s been fun, especially in a session or two where nearly none of the other volunteers showed up. Sometimes chaos is just fun :-)
Baekyongee is the university mascot, visible throughout the campus. So of course we included this four-legged whale creature on the loop too!
Packaging
I was hoping to do more packaging during DebCamp, but at least it was a non-zero amount:
Uploaded gdisk 1.0.10-2 to unstable (previously tested effects of adding dh-sequence-movetousr) (Closes: #1073679).
Worked a bit on bcachefs-tools (updating git to 1.9.4), but it has a build failure that I need to look into (we might need a newer bindgen) – update: I’m probably going to ROM this package soon, as it doesn’t seem suitable for packaging in Debian.
Calamares: Tested a fix for encrypted installs, and uploaded it.
Calamares: Uploaded (3.3.8-1) to backports (at the time of writing it’s still in backports-NEW).
Backported obs-gradient-source for bookworm.
Did some initial packaging on Cambalache, I’ll upload to unstable once gtk-4-dev (4.14.0) is in unstable (currently in experimental).
Pixelorama 1.0 – I did some initial packaging for Pixelorama back when we did the MiniDebConf Gaming Edition, but it had a few stoppers back then. Version 1.0 seems to fix all of that, but it depends on Godot 4.2 and we’re still on the 3 series in Debian, so I’ll upload this once Godot 4.2 hits at least experimental. Godot software/games are otherwise quite easy to run; it’s basically just source code / data that is installed and then run via godot-runner (the godot3-runner package in Debian).
The Python team’s session ended up being extended to a second part, since not all the issues fit into the first session.
I was distracted by too many things during the Python 3.12 transition (to the point where I thought that 3.11 was still new in Debian), so it was very useful listening to the retrospective of that transition.
There was a discussion whether Python 3.13 could still make it to testing in time for the freeze, and there seems to be consensus that it can, although likely with the new experimental features, such as the optional global interpreter lock and the just-in-time compiler, disabled.
I learned for the first time about the “dead batteries” project, PEP-0594, which removes from the Python standard library ancient modules that have mostly been superseded.
There was some talk about the process for changing team policy, and a policy discussion on whether we should require autopkgtests as a SHOULD or a MUST for migration to testing. As with many things, the devil is in the details and in my opinion you could go either way and achieve a similar result (the original MUST proposal allowed exceptions which imho made it the same as the SHOULD proposal).
There’s an idea to do some ongoing remote sprints, like having co-ordinated days for bug squashing / working on stuff together. This is a nice idea and probably a good way to energise the team and also to gain some interest from potential newcomers.
Louis-Philippe Véronneau was added as a new team admin, and there was some discussion of various Sphinx issues and which Lintian tags might be needed for Python 3.13. If you want to know more, you probably have to watch the videos / read the notes :)
Debian Developers can set up services on subdomains of debian.net, but a big problem we’ve had before was that developers were on their own for hosting those services. This meant that they either hosted them on their DSL/fiber connection at home, paid for the hosting themselves, or hosted them at different providers, which became an accounting nightmare when claiming back the funds used. So, a few of us started the debian.net hosting project (sometimes we just call it debian.net, which is probably a bit of a bug) so that Debian has accounts with cloud providers, and as admins we can create instances there that get billed directly to Debian.
We had an initial rush of services, but requests have slowed down since (not really a bad thing, we don’t want lots of spurious requests). Last year we did a census, to check which of the instances were still used, whether they received system updates and to ask whether they are performing backups. It went well and some issues were found along the way, so we’ll be doing that again.
We also gained two potential volunteers to help run things, which is great.
Pleroma has shown some cracks over the last year or so, and there are some forks that seem promising. At the same time, it might be worthwhile considering Mastodon too. So we’ll do some comparison of features and maintenance burden and find a way forward. At the time Pleroma was installed, it was way ahead in terms of moderation features.
Pixelfed is doing well and chugging along nicely, we should probably promote it more.
Peertube is working well, although we learned that we still don’t have all the recent DebConf videos on there. A bunch of other issues should be fixed once we move it to a new machine that we plan to set up.
We’re removing writefreely and plume. Nice concepts, but they didn’t get much traction yet, and no one who signed up for them actually used them, which is fine; some experimentation with services is good, and sometimes they prove to be very popular and other times not.
The WordPress multisite instance sees some mild use; otherwise we haven’t had any issues with it.
Matrix ended up being much, much bigger than we thought, both in usage and in its requirements. It’s very stateful and remembers discussions for as long as you let it, so its Postgres database is continuously expanding; this will also be a lot easier to manage once we have it on the new host.
Jitsi is also quite popular, but it could probably be on jitsi.debian.net instead (we created this on debian.social during the initial height of COVID-19, when we didn’t have the debian.net hosting yet), although in practice it doesn’t really matter where it lives.
Most of our current challenges will be solved by moving everything to a new big machine that has a few public IPs available for some VMs, so we’ll be doing that shortly.
Debian Foundation Discussion BoF
This was some brainstorming about the future structure of Debian, and what steps might be needed to get there. It’s way too big a problem to take on in a BoF, but we made some progress in figuring out some smaller pieces of the larger puzzle. The DPL is going to get in touch with some legal advisors and our trusted organisations so that we can aim to formalise our relationships a bit more by the time it’s DebConf again.
I also introduced my intention to join the Debian Partners delegation. When I was DPL, I enjoyed talking with external organisations who wanted to help Debian, but helping external organisations help Debian turned out to be too much additional load on the usual DPL roles, so I’m pursuing this with the Debian Partners team, more on that some other time.
This session wasn’t recorded, but if you feel like you missed something, don’t worry, all intentions will be communicated and discussed with project members before anything moves forward. There was a strong agreement in the room though that we should push forward on this, and not reach another DebConf where we didn’t make progress on formalising Debian’s structure more.
Social
Conference Dinner
Conference Dinner Photo from Santiago
The conference dinner took place in the university gymnasium. I hope not many people do sports there in the summer, because it got HOT. There were also some interesting observations on the thermodynamics of the attempted cooling solutions, which was amusing. On the plus side, the food was great, the company was good, and the speeches were kept to a minimum, so it was a great conference dinner, even though it was probably cut a bit short due to the heat.
Cheese and Wine
Cheese and Wine happened on 1 August, which happens to be the date I became a DD at DebConf17 in Montréal seven years before, so this was a nice accidental celebration of my Debiversary :)
Since I’m running out of time, I’ll add some more photos to this post some time after publishing it :P
Group Photo
As per DebConf tradition, Aigars took the group photo. You can find the high resolution version on Debian’s GitLab instance.
Debian annual conference Debconf 24, Busan, South Korea Photography: Aigars Mahinovs aigarius@debian.org License: CC-BYv3+ or GPLv2+
Talking
Ah yes, talking to people is a big part of DebConf, but I didn’t keep track of it very well.
I mostly listened to Alper talking a bit about his ideas for his Debian Installer talk.
I talked to Rhonda a bit about ActivityPub and MQTT and whether they could be useful for publicising Debian activity.
Listened to Gunnar and Julian have a discussion about GPG and APT which was interesting.
We had the usual continuous keysigning party. Besides its intended function, this is always a good icebreaker and a way for shy people to meet other shy people.
… and many other fly-by discussions.
Stuff that didn’t happen this DebConf
loo.py – A simple Python script that could eventually replace the obs-advanced-scene-switcher sequencer in OBS. It would also be extremely useful if we ever replace OBS for loopy. I was hoping to have some time to hack on this and try to recreate the current loopy in loo.py, but I didn’t have the time.
toetally – This year the videoteam had to scramble to get a bunch of resistors to assemble some tally lights. Even when assembled, they were a bit troublesome. It would’ve been nice to hack on toetally and get something ready for testing, but it mostly relies on having something like a Raspberry Pi Zero with an attached screen in order to work on it further. I’ll try to have something ready for the next MiniDebConf though.
extrepo on debian live – I think we should have extrepo installed by default on desktop systems, I meant to start a discussion on this, but perhaps it’s just time I go ahead and do it and announce it.
Live stream to peertube server – It would’ve been nice to live stream DebConf to PeerTube, but the dependency tree to get this going got a bit too huge. Following our plans discussed in the Debian Social BoF, we should have this safely ready before the next MiniDebConf and should be able to test it there.
Desktop Egg – there was this idea to get a stand-in theme for Debian testing/unstable until the artwork for the next release is finalized (Debian bug: #1038660). I have an idea that I meant to implement months ago, but too many things got in the way. It’s based on Juliette Taka’s Homeworld theme, and basically transforms the homeworld into an egg. Get it? Something that hasn’t hatched yet? I also only recently noticed that we never used the actual homeworld graphics (featuring the world image) in the final bullseye release. lol.
So, another DebConf and another new plush animal. Last but not least, thanks to PKNU for being such a generous and fantastic host to us! See you again at DebConf25 in Brest, France next year!
DebConf24 is now over! I'm very happy I was able to attend this year. If
you haven't had time to look at the schedule yet, here is a
selection of talks I liked.
What happens if I delete setup.py?: a live demo of upgrading to PEP-518 Python packaging
A great talk by Weezel showcasing how easy it is to migrate to PEP-518 for
existing Python projects.
This is the kind of thing I've been doing a lot when packaging upstream
projects that still use setup.py. I encourage you to send this kind of patch
upstream, as it makes everyone's life much easier.
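For projects whose setup.py is pure boilerplate, the migration often amounts to adding a pyproject.toml along these lines (a minimal sketch with placeholder metadata; real projects usually need more):

[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "example"      # placeholder
version = "1.0.0"     # placeholder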
Debian on Chromebooks: What's New and What's Next?
A talk by Alper Nebi Yasak, who has done great work on running Debian and the
Debian Installer on Chromebooks.
With Chromebooks being very popular machines in schools, it's nice to see people
working on a path to liberate them.
Sequoia PGP, sq, gpg-from-sq, v6 OpenPGP, and Debian
I had the chance to see Justus' talk on Sequoia — an OpenPGP implementation in
Rust — at DebConf22 in Kosovo. Back then, the conclusion was that
sq wasn't ready for production yet.
Well it seems it now is! This in-depth talk goes through the history of the
project and its goals. There is also a very good section on the current
OpenPGP/LibrePGP schism.
Chameleon - the easy way to try out Sequoia - OpenPGP written in Rust
A very short talk by Holger on Chameleon, a tool to make migration to Sequoia
easier.
TL;DW: apt install gpg-from-sq
Protecting OpenPGP keyservers from certificate flooding
Although I used to enjoy signing people's OpenPGP keys, I completely gave up on
this practice around 2019 when dkg's key was flooded with bogus
certifications and have been refusing to do so since.
In this talk, Gunnar talks about his PhD work on fixing this issue and making
sure we can eventually restore this important function on keyservers.
Bits from the DPL
Bits from the DPL! A DebConf classic.
Linux live patching in Debian
Having to reboot servers after kernel upgrades is a hassle, especially with
machines that have encrypted disk drives.
Although kernel live patching in Debian is still a work in progress, it is
encouraging to see people trying to fix this issue.
"I use Debian BTW": fzf, tmux, zoxide and friends
A fun talk by Samuel Henrique on little changes and tricks one can make to
their setup to make life easier.
Ideas to Move Debian Installer Forward
Another in-depth talk by Alper, this time on the Debian Installer and his ideas
to try to make it better. I learned a lot about the d-i internals!
Lightning Talks
Lightning talks are always fun to watch! This year, the following talks happened:
Customizing your Linux icons
A Free Speech tracker by SFLC.IN
Desktop computing is irrelevant
An introduction to wcurl
Aliasing in dpkg
A DebConf art space
Tiny Tapeout, Fomu, PiCI
Data processing and visualisation in the shell
Is there a role for Debian in the post-open source era?
As an economist, I've been interested in Copyright and business models in the
Free Software ecosystem for a while. In this talk, Hatta-san and Bruce Perens
discuss the idea of alternative licences that are not DFSG-free, like
Post-Open.
One of our newest servers, with a hefty 256GB of RAM, recently began killing
processes via the oomkiller.
According to free, only half of the RAM was in use (125GB). About 4GB was
free, with the remainder used by the file cache.
I’m used to seeing unexpected “free RAM” numbers like this and have been
assured that the kernel is simply not wasting RAM. If it’s not needed, use it
to cache files to save on disk I/O. That makes sense.
However… why is the oomkiller being called instead of flushing the file
cache?
I came up with all kinds of amazing and wrong theories: maybe the RAM is
fragmented (is that even a thing?!?), maybe there is a spike in RAM and the
kernel can’t flush the cache quickly enough (I really don’t think that’s a
thing). Maybe our kvm-manager has a weird bug (nope, but that didn’t stop me
from opening a spurious bug
report).
I learned lots of cool things, like that the oomkiller report includes a
table of the memory in use by each process (via the rss column), and that you
have to multiply that number by 4096 because it’s in 4K pages.
That’s how I discovered that the oomkiller was killing off processes with only
half the memory in use.
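For example, converting an rss value from that table into gigabytes (the number here is made up for illustration):

# rss is counted in 4 KiB pages, so multiply by 4096 for bytes
$ echo $(( 32768000 * 4096 / 1024 / 1024 / 1024 ))
125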
I also learned that lsof sometimes lists the same open file multiple times,
which made me think a bunch of files were being opened repeatedly causing a
memory problem, but really it amounted to nothing.
The last thing I learned, courtesy of an askubuntu
post, is that
the /dev filesystem is by default allocated exactly half the RAM on the
system. What a coincidence! That is exactly how much RAM is usable on the
server.
And, on the server in question, that filesystem is full. What?!? Normally, that
filesystem should be using 0 bytes because it’s not a real filesystem. But in
our case a process created a 127GB file there - it was only stopped because the
file system filled up.
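If you want to check for the same situation on your own machine, something like this will show it (illustrative commands only):

$ df -h /dev                                # devtmpfs "Size" defaults to half of physical RAM
$ du -ah /dev 2>/dev/null | sort -h | tail  # spot any large regular files hiding in there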
It's just a very tiny difference, but hopefully a big step forward for
our users. Our main download web page (which still uses the URL
https://www.debian.org/distrib/)
now has the title "Download Debian". Hopefully this will improve the
results in the search engines.
A brief history of this web page in time
1998: The title "Distribution" was added
2002: Title changed to "Getting Debian"
2024: Finally changed to "Download Debian"
Here are the screenshots of these three versions.
I like that, in the past, we had a selection menu in the top right corner to
select a mirror for downloading.
A few days ago I also removed the note
"Internal ISDN cards are unfortunately not supported." from the
netinst subpage. Things are moving forward, but slowly.
Most banks are behind CDNs and DDoS mitigation providers nowadays, though they still hold their own IP space. I was interested in this, so I compiled a list from BGP.Tools and the Hurricane Electric BGP Toolkit.
The Freedesktop.org Specifications directory contains a list of common specifications that have accumulated over the decades and define how common desktop environment functionality works. The specifications are designed to increase interoperability between desktops. Common specifications make the life of both desktop-environment developers and especially application developers (who will almost always want to maximize the number of Linux DEs their app can run on and behave as expected in, to increase their app’s target audience) a lot easier.
Unfortunately, building the HTML specifications and maintaining the directory of available specs has become a bit of a difficult chore, as the pipeline for building the site has become fairly old and unmaintained (parts of it still depended on Python 2). In order to make my life of maintaining this part of Freedesktop easier, I aimed to carefully modernize the website. I do have bigger plans to maybe eventually restructure the site to make it easier to navigate and not just a plain alphabetical list of specifications, and to integrate it with the Wiki, but in the interest of backwards compatibility and to get anything done in time (rather than taking on a mega-project that can’t be finished), I decided to just do the minimum modernization first to get a viable website, and do the rest later.
So, long story short: Most Freedesktop specs are written in DocBook XML. Some were plain HTML documents, some were DocBook SGML, and a few were plaintext files. To make things easier to maintain, almost every specification is written in DocBook now. This also simplifies the review process, and we may be able to switch to something else like AsciiDoc later if we want to. Of course, one could have switched to something other than DocBook, but that would have been a much bigger chore with a lot more broken links, and I did not want this to become an even bigger project than it already was, preferring to keep its scope somewhat narrow.
DocBook is a markup language for documentation which has been around for a very long time, and therefore has older tooling around it. But fortunately our friends at openSUSE created DAPS (DocBook Authoring and Publishing Suite) as a modern way to render DocBook documents to HTML and other file formats. DAPS is now used to generate all Freedesktop specifications on our website. The website index and the specification revisions are also now defined in structured TOML files, to make them easier to read and to extend. A bunch of specifications that had been missing from the original website are also added to the index and rendered on the website now.
Originally, I wanted to put the website live in a temporary location and solicit feedback, especially since some links have changed and not everything may have redirects. However, due to how GitLab Pages worked (and due to me not knowing GitLab CI well enough…) the changes went live before their MR was actually merged. Rather than reverting the change, I decided to keep it (as the old website did not build properly anymore) and to see if anything breaks. So far, no dead links or bad side effects have been observed, but:
If you notice any broken link to specifications.fd.o or anything else weird, please file a bug so that we can fix it!
Thank you, and I hope you enjoy reading the specifications with better rendering and a more coherent look!
Thankfully, no tragedies to report this week! I thank each and every one of you who has donated to my car fund. I still have a ways to go and could use some more help so that we can go to the funeral. https://gofund.me/033eb25d I am between contracts and work packages, so all of my work is currently done for free. Thanks for your consideration.
Another very busy week getting qt6 updates in Debian, Kubuntu, and KDE snaps.
Kubuntu:
Merkuro and Neochat SRUs have made progress.
See Debian for the qt6 Plasma / applications work.
Debian:
qtmpv – in NEW
arianna – in NEW
kamera – experimental
libkdegames – experimental
kdenetwork-filesharing – experimental
xwaylandvideobridge – NEW
futuresql – NEW
kpat – WIP
Tokodon – Done, but needs qtmpv to pass NEW
Gwenview – WIP needs kamera, kio-extras
kio-extras – Blocked on kdsoap, whose maintainer is not responding to bug reports or emails. We will likely fork it in Kubuntu as our freeze quickly approaches.
KDE Snaps:
Updated Qt to 6.7.2, which required a rebuild of all our snaps. Also found an issue with mismatched ffmpeg libraries; we have to bundle them for now until the versioning issues are resolved.
Made new theme snaps for KDE Breeze: gtk-theme-breeze and icon-theme-breeze. If you use the Plasma theme Breeze, please install these and run:
for PLUG in $(snap connections | grep gtk-common-themes:icon-themes | awk '{print $2}'); do sudo snap connect ${PLUG} icon-theme-breeze:icon-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-3-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-3-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-2-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-2-themes; done
This should resolve most theming issues. We are still waiting for kdeglobals to be merged in snapd to fix colorscheme issues, it is set for next release. I am still working on qt6 themes and working out how to implement them in snaps as they are more complex than gtk themes with shared libraries and file structures.
Please note: please help test the --edge snaps so I can promote them to stable.
These are my bits from the DPL, written on my last day at another great
DebConf.
DebConf attendance
At the beginning of July, there was some discussion with the bursary and
content team about sponsoring attendees. The discussion continued at
DebConf.
I do not have much experience with these discussions. My summary is that
while there is an honest attempt to be fair to everyone, it did not seem to
work for all, and some critical points for future discussion remained. In
any case, I'm thankful to the bursary team for doing such a time-draining
and tedious job.
Popular packages not yet on Salsa at all
Otto Kekäläinen did some interesting investigation about
Popular packages not yet on Salsa at all.
I might soon provide a more up-to-date list via a UDD query
that considers more recent uploads than the trends data. For
instance, wget has meanwhile moved to Salsa (thanks to Noël Köthe for this).
Keep on contacting more teams
I kept on contacting teams in July. Although I managed to contact far
fewer teams than I had hoped, I was able to present some conclusions in
the Debian Teams exchange BoF and on slide 16/23 of my
Bits from the DPL talk.
I intend to contact more teams in the coming months.
Nominating Jeremy Bícha for GNOME Advisory Board
I've nominated Jeremy Bícha to
GNOME Advisory Board.
Jeremy has volunteered to represent Debian at
GUADEC in Denver.
DebCamp / DebConf
I attended DebCamp starting from the evening of 22 July and had a lot of fun with
other attendees. As always, DebConf is an important event for me nearly
every year. I enjoyed Korean food, a Korean bath, nature at the
coastline, and other things.
I used DebCamp and DebConf for several discussions. My main focus was on
discussions with FTP master team members Luke Faraone, Sean Whitton, and
Utkarsh Gupta. I'm really happy that the four of us absolutely agree on
some proposed changes to the structure of the FTP master team, as well
as changes that might be fruitful for the work of the FTP master team
itself and for Debian developers regarding the processing of new
packages.
My explicit thanks go to Luke Faraone, who gave a great introduction to
FTP master work in their
BoF.
It was very instructive for the attending developers to understand how the
FTP master team checks licenses and copyright and what workflow is used
for accepting new packages.
In the first days of DebConf, I talked to representatives of DebConf
platinum sponsor WindRiver, who announced the derivative
eLxr. I warmly welcome
this new derivative and look forward to some great cooperation. I also
talked to the representative of our gold sponsor, Microsoft.
My first own event was the
Debian Med BoF.
I'd like to repeat that it might be interesting not only for people
working in medicine and microbiology, as it always contains some hints on how
to work together as a team.
As said above, I tried to summarise some first results of my team
contacts and got some further input from other teams in the
Debian Teams exchange BoF.
Finally, I had my
Bits from DPL talk.
I received positive responses from attendees as well as from remote
participants, which makes me quite happy. For those who were not able to
join the events on-site or remotely, the videos of all events will be
available on the DebConf site soon. I'd like to repeat the explicit need
for some volunteers to join the Lintian team. I'd also like to point out
the "Tiny tasks" initiative I'd like to start (see below).
BTW, if you happen to solve my quiz about the background images,
there is a
summary page
in my slides which might help to assign every slide to some DebConf. I
assume that if you pool your knowledge, you can solve more than just
the simple ones. Just let me know if you have some solution. You can add
numbers to the rows and letters to the columns and send me something like:
2000/2001: Uv + Wx
2002: not attended
2003: Yz
2004: not attended
2005:
2006: not attended
2007: ...
2024: A1
This list provides some additional information for DebConfs I did not
attend and for which no video stream was available. It also reminds you of
the one I uncovered this year, and that I used two images from 2001 since
I did not have one from 2000. Have fun reassembling good memories.
Tiny tasks: Bug of the day
As I mentioned in my
Bits from DPL talk,
I'd like to start a "Tiny tasks" effort within Debian. The first type of
tasks will be the
Bug of the day
initiative. For those who would like to join, please join the corresponding
Matrix channel. I'm
curious to see how this might work out and am eager to gain some initial
experiences with newcomers. I won't be available until next Monday, as I'll
start traveling soon and have a family event (which is why I need to leave
DebConf today after the formal dinner).
My Debian contributions this month were all
sponsored by Freexian.
You can also support my work directly via
Liberapay.
OpenSSH
At the start of the month, I uploaded a quick fix (via Salvatore Bonaccorso)
for a regression from
CVE-2006-5051,
found by
Qualys;
this was because I expected it to take me a bit longer to merge OpenSSH
9.8, which had the full fix.
This turned out to be a good guess: it took me until the last day of the
month to get the merge done. OpenSSH 9.8 included some substantial changes
to split the server into a listener binary and a per-session binary, which
required some corresponding changes in the GSS-API key exchange patch. At
this point I was very grateful for the GSS-API integration
test
contributed by Andreas Hasenack a little while ago, because otherwise I
might very easily not have noticed my mistake: this patch adds some entries
to the key exchange algorithm proposal, and on the server side I’d
accidentally moved that to after the point where the proposal is sent to the
client, which of course meant it didn’t work at all. Even with a failing
test, it took me quite a while to spot the problem, involving a lot of
staring at strace output and comparing debug logs between versions.
I contributed a change to allow maintaining Incus container and VM images
in
parallel.
I use both of these regularly (containers are faster, but some tests need
full machine isolation), and the build tools previously didn’t handle that
very well.
I now have a script that just does this regularly to keep my images up to
date (although for now I’m running this with PATH pointing to autopkgtest
from git, since my change hasn’t been released yet):
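Roughly, the script boils down to something like this (a sketch only; the exact autopkgtest-build-incus options and image aliases are from memory, so double-check against the manpage):

#!/bin/sh
set -e
# refresh both container and VM images for the releases I test against
for release in testing unstable; do
    autopkgtest-build-incus images:debian/$release
    autopkgtest-build-incus --vm images:debian/$release   # flag name assumed
done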
I reverted python-tenacity to an earlier version due to regressions in a
number of OpenStack packages, including
octavia and
ironic. (This seems to be due to
#486 upstream.)
I fixed a build failure in
python3-simpletal due to Python 3.12 removing the old imp module.
I added non-superficial autopkgtests to a number of packages, including
httmock, py-macaroon-bakery, python-libnacl, six, and storm.
I switched a number of packages to build using PEP
517 rather than calling setup.py
directly, including alembic, constantly, hyperlink, isort, khard,
python-cpuinfo, and python3-onelogin-saml2. (Much of this was by working
through the
missing-prerequisite-for-pyproject-backend
Lintian tag, but there’s still lots to do.)
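The mechanical part of such a switch is usually tiny; a sketch of the typical shape (assuming dh with pybuild; the key is the pybuild-plugin-pyproject build-dependency, which makes pybuild drive the PEP 517 backend):

# debian/control (excerpt): add pybuild-plugin-pyproject (and the chosen
# backend, e.g. python3-setuptools) to Build-Depends.
# debian/rules can then stay the standard stanza:
#!/usr/bin/make -f
%:
	dh $@ --buildsystem=pybuild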
I upgraded frozenlist, ipykernel, isort, langtable, python-exceptiongroup,
python-launchpadlib, python-typeguard, pyupgrade, sqlparse, storm, and
uncertainties to new upstream versions. In the process, I added myself to
Uploaders for isort, since the previous primary uploader has
retired.
Other odds and ends
I applied a suggestion by Chris Hofstaedtler to create /etc/subuid and
/etc/subgid in base-passwd, since the
login package is no longer essential.
I fixed a wireless-tools regression due
to iproute2 dropping its (/usr)/sbin/ip compatibility symlink.
Debconf 24 is coming to a close in Busan, South Korea this year.
I thought that last year in India was hot. This year somehow managed to beat that. With 35°C and high
humidity, the 55 km that I managed to walk between the two conference buildings really put the
pressure on. Thankfully the air conditioning in the talk rooms has been great and fresh water has been
plentiful. And the Korean food has been excellent and very energizing.
The rest of my photos from the event will be published next week. That will give me a bit more time to process
them correctly and also give all of you a chance to see these pictures with fresh eyes and stir up new memories from
the event.
A short status update on what happened on my side last month. Looking
at unified push support for Chatty prompted some libcmatrix fixes and
Chatty improvements (benefiting other protocols like SMS/MMS as well).
The Bluetooth status page in Phosh was a slightly larger change
code-wise, as we also enhanced our common widgets for building status pages,
simplifying the Wi-Fi status page and making future status pages
simpler. But as usual, investigating bugs, reviewing patches (thanks!)
and keeping up with the changing world around us is what ate most of
the time.
A new minor release 0.4.24 of RQuantLib
arrived on CRAN this afternoon
(just before the CRAN summer
break starting tomorrow), and has been uploaded to Debian too.
QuantLib is a rather
comprehensive free/open-source library for quantitative
finance. RQuantLib
connects (some parts of) it to the R environment and language, and has
been part of CRAN for more than
twenty-one years (!!) as it was one of the first packages I
uploaded.
This release of RQuantLib
follows the recent release from last
week which updated to QuantLib version 1.35 released that
week, and solidifies conditional code for older QuantLib versions in one source
file. We also updated and extended the configure source
file, and increased the mininum version of QuantLib to 1.25.
Changes in RQuantLib version 0.4.24 (2024-07-31)
* Updated detection of QuantLib libraries in configure
* The minimum version has been increased to QuantLib 1.25, and DESCRIPTION has been updated to state it too
* The dividend case for vanilla options still accommodates deprecated older QuantLib versions if needed (up to QuantLib 1.25)
* The configure script now uses PKG_CXXFLAGS and PKG_LIBS internally, and shows the values it sets
I tried the Android Element app for Matrix for the first time during DebConf. It feels good; for me it's a better IRC.
I've been using it on my Chromebook, and one annoyance is that I haven't found a keyboard shortcut for sending messages.
I would have expected Shift or Ctrl with Enter to send the current message, but so far I have been touching the display to send messages.
Can I fix this? Where is the code?
Joining DebConf again, it's been 16 years. I feel very different. Back then I didn't understand the need for people who were not directly doing Debian work; now I think I appreciate things more.
I don't remember what motivated me to do everything back then. Now I am doing what is necessary for me. Maybe it was back then too.
This is one of those posts that’s more for my own reference than likely to be helpful for others. If you’re unlucky it’ll have some useful tips for you. If I’m lucky then I’ll get a bunch of folk pointing out some optimisations.
First, what I’m trying to achieve. I want a virtual machine environment where I can manually do tests on random kernels, and also various TPM related experiments. So I don’t want something as fixed as a libvirt setup. I’d like the following:
It to be fairly lightweight, so I can run it on a laptop while usefully doing other things
I need a TPM 2.0 device to appear to the guest OS, but it doesn’t need to be a real TPM
Block device discard should work, so I can back it with a qcow2 image and use fstrim to keep the actual on disk size small, without constraining my potential for file system expansion should I need it
I’ve no need for graphics, in fact a serial console would be better as it eases copy & paste, especially when I screw up kernel changes
That turns out to be possible, but it took a bunch of trial and error to get there. So I’m writing it down. I generally do this on a Fedora based host system (FC40 at present, but this all worked with FC38 + FC39 too), and I run Debian 12 (bookworm) as the guest. At present I’m using qemu 8.2.2 and swtpm 0.9.0, both from the FC40 packages.
One other issue I spent too long tracking down is that the version of grub 2.06 in bookworm does not manage to pass the TPMEventLog through to the guest kernel properly. The events get measured and the PCRs updated just fine, but /sys/kernel/security/tpm0/binary_bios_measurements doesn’t even get created. Using either grub 2.06 from FC40, or the 2.12 backport in bookworm-backports, makes this work just fine.
Anyway, for reference, the following is the script I use to start the swtpm, and then qemu. The debugcon line can be dropped if you’re not interested in OVMF debug logging. This needs the guest OS to be configured up for a serial console, but avoids the overhead of graphics emulation.
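A sketch of the shape the script takes (paths, image names and sizes are mine, and OVMF file locations vary by distro; adjust to taste):

#!/bin/sh
set -e
STATE=$HOME/vm/tpm
mkdir -p "$STATE"

# start the software TPM 2.0; qemu talks to it over a unix socket
swtpm socket --tpm2 --tpmstate dir="$STATE" \
    --ctrl type=unixio,path="$STATE/swtpm.sock" --daemon

qemu-system-x86_64 -machine q35,accel=kvm -cpu host -smp 2 -m 2G \
    -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=OVMF_VARS.fd \
    -chardev socket,id=chrtpm,path="$STATE/swtpm.sock" \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0 \
    -drive file=debian12.qcow2,format=qcow2,if=none,id=disk0,discard=unmap \
    -device virtio-blk-pci,drive=disk0 \
    -debugcon file:debug.log -global isa-debugcon.iobase=0x402 \
    -display none -serial mon:stdio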
As I said at the start, I’m open to any hints about other options I should be passing; as long as I get acceptable performance in the guest I care more about reducing host load than optimising for the guest.
Review: The Book That Wouldn't Burn, by Mark Lawrence
Series: Library Trilogy #1
Publisher: Ace
Copyright: 2023
ISBN: 0-593-43793-4
Format: Kindle
Pages: 561
The Book That Wouldn't Burn is apparently high fantasy, but of the
crunchy sort that could easily instead be science fiction. It is the
first of a trilogy.
Livira is a young girl, named after a weed, who lives in a tiny settlement
in the Dust. She is the sort of endlessly curious and irrepressible girl
who can be more annoying than delightful to adults who are barely keeping
everyone alive. Her settlement is not the sort of place that's large
enough to have a name; only their well keeps them alive in the desert and
the ever-present dust. There is a city somewhere relatively near, which
Livira dreams of seeing, but people from the settlement don't go there.
When someone is spotted on the horizon approaching the settlement, it's
the first time Livira has ever seen a stranger. It's also not a good
sign. There's only one reason for someone to seek them out in the Dust:
to take. Livira and the other children are, in short order, prisoners of
the humanoid dog-like sabbers, being dragged off to an unknown fate.
Evar lives in the library and has for his entire life. Specifically, he
lives in a square room two miles to a side, with a ceiling so high that it
may as well be a stone sky. He lived there with his family before he was
lost in the Mechanism. Years later, the Mechanism spit him out alongside
four other similarly-lost kids, all from the same library in different
times. None of them had apparently aged, but everyone else was dead.
Now, years later, they live a strange and claustrophobic life with way too
much social contact between way too few people.
Evar's siblings, as he considers them, were each in the Mechanism with a
book. During their years in the Mechanism they absorbed that book until
it became their focus and to some extent their personality. His brothers
are an assassin, a psychologist, and a historian. His sister, the last to
enter the Mechanism and a refugee from the sabber attack that killed
everyone else, is a warrior. Evar... well, presumably he had a book,
since that's how the Mechanism works. But he can't remember anything
about it except the feeling that there was a woman.
Evar lives in a library in the sense that it's a room full of books, but
those books are not on shelves. They're stacked in piles and massive
columns, with no organizational system that any of them could discern.
There are four doors, all of which are closed and apparently impenetrable.
In front of one of them is a hundred yards of char and burned book
remnants, but that door is just as impenetrable as the others. There is a
pool in the center of the room, crops surrounding it, and two creatures
they call the Soldier and the Assistant. That is the entirety of Evar's
world.
As you might guess from the title, this book is about a library. Evar's
perspective of the library is quite odd and unexplained until well into
the book, and Livira's discovery of the library and subsequent
explorations are central to her story, so I'm going to avoid going into
too many details about its exact nature. What I will say is that I have
read a lot of fantasy novels that are based around a library, but I don't
think I've ever read one that was this satisfying.
I think the world of The Book That Wouldn't Burn is fantasy, in
that there are fundamental aspects of this world that don't seem amenable
to an explanation consistent with our laws of physics. It is, however,
the type of fantasy with discoverable rules. Even better, it's the type
of fantasy where discovering the rules is central to the story, for both
the characters and the readers, and the rules are worth the effort. This
is a world-building tour de force: one of the most engrossing and
deeply satisfying slow revelations that I have read in a long time. This
book is well over 500 pages, the plot never flags, new bits of
understanding were still slotting into place in the last chapter, and
there are lots of things I am desperately curious about that Lawrence left
for the rest of the series. If you like puzzling out the history and
rules of an invented world and you have anything close to my taste in
characters and setting, you are going to love this book.
(Also, there is at least one C.S. Lewis homage that I will not spoil but
that I thought was beautifully done and delightfully elaborated, and I am
fairly sure there is a conversation happening between this book and Philip
Pullman's His Dark Materials series that I didn't quite untangle
but that I am intrigued by.)
I do need to offer a disclaimer: Livira is precisely the type of character
I love reading about. She's stubborn, curious, courageous, persistent,
egalitarian, insatiable, and extremely sharp. I have a particular soft
spot for exactly this protagonist, so adjust the weight of my opinion
accordingly. But Lawrence also makes excellent use of her as a spotlight
to illuminate the world-building. More than anything else in the world,
Livira wants to understand, and there is so much here to
understand.
There is an explanation for nearly everything in this book, and those
explanations usually both make sense and prompt more questions. This is
such a tricky balance for the writer to pull off! A lot of world-building
of this sort fails either by having the explanations not live up to the
mysteries or by tying everything together so neatly that the stakes of the
world collapse into a puzzle box. Lawrence avoids both failures. This
world made sense to me but remained sufficiently messy to feel like humans
were living in it. I also thought the pacing and timing were impeccable:
I figured things out at roughly the same pace as the characters, and
several twists and turns caught me entirely by surprise.
I do have one minor complaint and one caveat. The minor complaint is that
I thought one critical aspect of the ending was a little bit too neat and
closed. It was the one time in the book where I thought Lawrence
simplified his plot structure rather than complicated it, and I didn't
like the effect it had on the character dynamics. There is, thankfully,
the promise of significant new complications in the next book.
The caveat is a bit harder to put my finger on, but a comparison to Alaya
Dawn Johnson's The Library of Broken
Worlds might help. That book was also about a library, featured a
protagonist thrown into the deep end of complex world-building, and put
discovery of the history and rules at the center of the story. I found
the rules structure of The Book That Wouldn't Burn more
satisfyingly complicated and layered, in a way that made puzzle pieces fit
together in my head in a thoroughly enjoyable way. But Johnson's book is
about very large questions of identity, history, sacrifice, and pain, and
it's full of murky ambiguity and emotions that are only approached via
metaphor and symbolism. Lawrence's book is far more accessible, but the
emotional themes are shallower and more straightforward. There is a
satisfying emotional through-line, and there are some larger issues at
stake, but it won't challenge your sense of morality and justice the way
that The Library of Broken Worlds might. I think which of those
books one finds better will depend on what mood you're in and what reading
experience you're looking for.
Personally, I was looking for a scrappy, indomitable character who would
channel her anger into overcoming every obstacle in the way of thoroughly
understanding her world, and that's exactly what I got. This was my most
enjoyable reading experience of the year to date and the best book I've
read since Some Desperate Glory.
Fantastic stuff, highly recommended.
Followed by The Book That Broke the World, and the ending is a bit
of a cliffhanger so you may want to have that on hand. Be warned that the
third book in the series won't be published until 2025.
In the red corner, weighing in at… nah, I’m not going to do that schtick.
The plaintiff in the case is Alegeus Technologies, LLC, a Delaware Corporation that, according to their filings, “is a leading provider of a business-to-business, white-label funding and payment platform for healthcare carriers and third-party administrators to administer consumer-directed employee benefit programs”.
Not being subject to the US’ bonkers health care system, I have only a passing familiarity with the sorts of things they do, but presumably it involves moving a lot of money around, which is sometimes important.
The CA/Browser Forum Baseline Requirements (BRs) (which all CAs are required to adhere to, by virtue of their being included in various browser and OS trust stores), say that revocation is required within 24 hours when “[t]he CA obtains evidence that the validation of domain authorization or control for any Fully‐Qualified Domain Name or IP address in the Certificate should not be relied upon” (section 4.9.1.1, point 5).
DigiCert appears to have at least tried to do the right thing, by opening the above Mozilla bug giving some details of the problem, and notifying their customers that their certificates were going to be revoked.
One may quibble about how fast they’re doing it, but they’re giving it a decent shot, at least.
A complicating factor in all this is that, only a touch over a month ago, Google Chrome announced the removal of another CA, Entrust, from its own trust store program, citing “a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports”.
Many of these compliance failures were failures to revoke certificates in a timely manner.
One imagines that DigiCert would not like to gain a reputation for tardy revocation, particularly at the moment.
The Legal Action
Now we come to Alegeus Technologies.
They’ve opened a civil case whose first action is to request the issuance of a Temporary Restraining Order (TRO) that prevents DigiCert from revoking certificates issued to Alegeus (which the court has issued).
This is a big deal, because TROs are legal instruments that, if not obeyed, constitute contempt of court (or something similar) – and courts do not like people who disregard their instructions.
That means that, in the short term, those certificates aren’t getting revoked, despite the requirement imposed by root stores on DigiCert that the certificates must be revoked.
DigiCert is in a real “rock / hard place” situation here: revoke and get punished by the courts, or don’t revoke and potentially (though almost certainly not, in the circumstances) face removal from trust stores (which would kill, or at least massively hurt, their business).
The reasons that Alegeus gives for requesting the restraining order is that “[t]o
Reissue and Reinstall the Security Certificates, Alegeus must work with and
coordinate with its Clients, who are required to take steps to rectify the
certificates. Alegeus has hundreds of such Clients. Alegeus is generally
required by contract to give its clients much longer than 24 hours’ notice
before executing such a change regarding certification.”
In the filing, Alegeus does acknowledge that “DigiCert is a voluntary member of the Certification Authority Browser Forum (CABF), which has bylaws stating that certificates with an issue in their domain validation must be revoked within 24 hours.”
This is a misstatement of the facts, though.
It is the BRs, not the CABF bylaws, that require revocation, and the BRs apply to all CAs that wish to be included in browser and OS trust stores, not just those that are members of the CABF.
In any event, given that Alegeus was aware that DigiCert is required to revoke certificates within 24 hours, one wonders why Alegeus went ahead and signed agreements with their customers that required a lengthy notice period before changing certificates.
What complicates the situation is that there is apparently a Master Services Agreement (MSA) that states that it “constitutes the entire agreement between the parties” – and that MSA doesn’t mention certificate revocation anywhere relevant.
That means that it’s not quite so cut-and-dried that DigiCert does, in fact, have the right to revoke those certificates.
I’d expect a lot of “update to your Master Services Agreement” emails to be going out from DigiCert (and other CAs) in the near future to clarify this point.
Not being a lawyer, I can’t imagine which way this case might go, but there’s one thing we can be sure of: some lawyers are going to be able to afford that trip to a tropical paradise this year.
The Security Issues
The requirement for revocation within 24 hours is an important security control in the WebPKI ecosystem.
If a certificate is misissued to a malicious party, or is otherwise compromised, it needs to be marked as untrustworthy as soon as possible.
While revocation is far from perfect, it is the best tool we have.
In this court filing, Alegeus has claimed that they are unable to switch certificates with less than 24 hours notice (due to “contractual SLAs”).
This is a pretty big problem, because there are lots of reasons why a certificate might need to be switched out Very Quickly.
As a practical example, someone with access to the private key for your SSL certificate might decide to use it in a blog post.
Letting that sort of problem linger for an extended period of time might end up being a Pretty Big Problem of its own.
An organisation that cannot respond within hours to a compromised certificate is playing chicken with their security.
The Takeaways
Contractual obligations that require you to notify anyone else of a certificate (or private key) changing are bonkers, and completely antithetical to the needs of the WebPKI.
If you have to have them, you’re going to want to start transitioning to a private PKI, wherein you can do whatever you darn well please with revocation (or not).
As these sorts of problems keep happening, trust stores (and hence CAs) are going to crack down on this sort of thing, so you may as well move sooner rather than later.
If you are an organisation that uses WebPKI certificates, you’ve got to be able to deal with any kind of certificate revocation event within hours, not days.
This basically boils down to automated issuance and lifecycle management, because having someone manually request and install certificates is terrible on many levels.
There isn’t currently a completed standard for notifying subscribers if their certificates need premature renewal (say, due to needing to be revoked), but the ACME Renewal Information Extension is currently being developed to fill that need.
Ask your CA if they’re tracking this standards development, and when they intend to have the extension available for use.
(Pro-tip: if they say “we’ll start doing development when the RFC is published”, run for the hills; that’s not how responsible organisations work on the Internet).
The diffoscope maintainers are pleased to announce the release of diffoscope
version 273. This version includes the following changes:
[ Chris Lamb ]
* Factor out version detection in test_jpeg_image. (Re:
reproducible-builds/diffoscope#384)
* Ensure that 'convert' is from Imagemagick 6.x; we will need to update a
few things with IM7. (Closes: reproducible-builds/diffoscope#384)
* Correct import of identify_version after refactoring change in 037bdcbb0.
[ Mattia Rizzolo ]
* tests:
+ Add OpenSSH key test with a ed25519 key.
+ Skip the OpenSSH test with DSA key if openssh is >> 9.7
+ Support ffmpeg >= 7 that adds some extra context to the diff
* Do not ignore testing in gitlab-ci.
* debian:
+ Temporarily remove aapt, androguard and dexdump from the build/test
dependencies as they are not available in testing/trixie. Closes: #1070416
+ Bump Standards-Version to 4.7.0, no changes needed.
+ Adjust options to make sure not to pack the python s-dist directory
into the debian source package.
+ Adjust the lintian overrides.
With the work that has been done in the debian-installer/netcfg merge-proposal !9 it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that will come pre-installed with Netplan and all network configuration structured in /etc/netplan/.
In this write-up, I’d like to run you through a list of commands for experiencing the Netplan enabled installation process first-hand. Let’s start with preparing a working directory and installing the software dependencies for our virtualized Debian system:
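Something along these lines should do (a sketch: package names assume a Debian/Ubuntu host, and the mini.iso URL may move around):

$ mkdir -p ~/d-i_netplan && cd ~/d-i_netplan
$ sudo apt install qemu-system-x86 ovmf wget
$ wget http://deb.debian.org/debian/dists/unstable/main/installer-amd64/current/images/netboot/gtk/mini.iso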
Next we’ll prepare a VM by copying the EFI firmware files, preparing a persistent EFIVARS file to boot from FS0:\EFI\debian\grubx64.efi, and creating a virtual disk for our machine:
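For example (the OVMF paths are the Debian ones; the file names match what is used below):

$ cp /usr/share/OVMF/OVMF_CODE_4M.fd .
$ cp /usr/share/OVMF/OVMF_VARS_4M.fd EFIVARS.fd    # persistent EFI variable store
$ qemu-img create -f qcow2 disk.qcow2 10G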
Finally, let’s launch the debian-installer using a preseed.cfg file, that will automatically install Netplan (netplan-generator) for us in the target system. A minimal preseed file could look like this:
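A sketch of the essential part (everything else follows normal d-i preseeding):

# preseed.cfg: pull Netplan and friends into the target system
d-i pkgsel/include string netplan.io netplan-generator systemd-resolved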
For this demo, we’re installing the full netplan.io package (incl. the interactive Python CLI), as well as the netplan-generator package and systemd-resolved, to show the full Netplan experience. You can choose the preseed file from a set of different variants to test the different configurations.
We’re using the linux kernel and initrd.gz here to be able to pass the preseed URL as a parameter to the kernel’s cmdline directly. Launching this VM should bring up the official debian-installer in its netboot/gtk form:
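The launch looks roughly like this (a sketch: it assumes preseed.cfg is served over HTTP from the working directory, which the guest reaches at 10.0.2.2 with qemu’s user networking):

$ wget http://deb.debian.org/debian/dists/unstable/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux
$ wget http://deb.debian.org/debian/dists/unstable/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz
$ python3 -m http.server 8000 &
$ qemu-system-x86_64 -M q35 -enable-kvm -cpu host -m 2G \
    -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE_4M.fd \
    -drive if=pflash,format=raw,file=EFIVARS.fd \
    -drive file=disk.qcow2,format=qcow2,if=virtio \
    -cdrom mini.iso \
    -kernel ./linux -initrd ./initrd.gz \
    -append "auto=true priority=critical url=http://10.0.2.2:8000/preseed.cfg"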
Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings, to see how those get translated to /etc/netplan/ in the target system.
After you confirmed your partitioning changes, the base system gets installed. I suggest not to select any additional components, like desktop environments, to speed up the process.
During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.
Done! After the installation finished, you can reboot into your virgin Debian Sid/Trixie system.
To do that, quit the current Qemu process by pressing Ctrl+C, and make sure to copy over the EFIVARS.fd file that was modified by grub during the installation, so Qemu can find the new system. Then reboot into the new system, not using the mini.iso image any more:
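For example (a sketch, reusing the files prepared above):

$ qemu-system-x86_64 -M q35 -enable-kvm -cpu host -m 2G \
    -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE_4M.fd \
    -drive if=pflash,format=raw,file=EFIVARS.fd \
    -drive file=disk.qcow2,format=qcow2,if=virtio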
Finally, you can play around with your Netplan enabled Debian system! As you will find, /etc/network/interfaces exists but is empty; it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.
In our case, we also installed the Netplan CLI, so we can play around with some of its features, like netplan status.
Thank you for following along the Netplan enabled Debian installation process and happy hacking! If you want to learn more, find us at GitHub:netplan.
Recently, Ola started rolling out Ola Maps in their main mobile app, replacing Google Maps, while also offering maps as a service to other organizations. The interesting part for me was the usage of OpenStreetMap data as the base map alongside Ola’s proprietary data sources. I’ll mostly talk about the map data part here.
Screenshot of Ola App. OpenStreetMap attribution is shown after clicking the Ola Map icon.
OpenStreetMap (OSM) for starters, is a community owned and edited map data resource which gives freedom to use map data for any purpose. This includes the condition that attribution is given back to OSM which in turn ideally would encourage other users to contribute, correct and edit, helping everyone in turn. Due to this, OSM is also regarded as Wikipedia of maps. OSM data is not just used by Ola. Many others use it for various purposes like Wikipedia Maps, Strava Maps, Snapchat Map, bus tracking in GoIbibo/Redbus.
The OSM India community has been following Ola’s map endeavor to use and contribute to OSM since they went public. As required by OSM for organized mapping efforts, Ola created a wiki entry with information regarding their editors, usage, and policy, and mentions the following as their data usage case:
OSM data is used for the road network, traffic signs and signals, buildings, natural features, landuse polygons and some POIs.
Creating a map product is a task in itself: an engineering hurdle of building the tech stack for collecting, validating, importing and serving the map, plus the map data part. Ola has done a good job describing the development of the tech stack in their blog post. Ola holds an enormous corpus of live and constantly updated GPS trace data. Their drivers, users, and delivery partners generate those, which they harness to validate, correct and add missing map data. Ola employees now regularly contribute new or missing roads (including adding dual carriageways to existing ones), and fix road geometry, classification, road access types and restrictions pan-India. They have been active and engaging in OSM India community channels, though community members have raised some concerns about their OSM edit practices.
Ola’s venture into the map industry isn’t something out of the ordinary. Grab, a South East Asian company with business interests in food delivery, ride hailing and a bunch of other services, also switched to an in-house map based on OpenStreetMap, followed by launching its own map product. Grab, too, contributed data back like Ola. Both Ola and Grab rely heavily on maps for their business operations and seem to have chosen to go independent, bootstrapping their products on OSM.
Ola could have gone their own route, bootstrapping map data from scratch, which would have been a gargantuan task when you’re competing against the likes of Google Maps and Bing Maps, which have been at this for many years. Deciding to use OSM and actively giving back to make the data better for everyone deserves accolades. Now I’m waiting for their second blog post, which they mention would be on map data.
If you’re an Ola Maps user through Ola Electric or the Ola app and find some road unmapped, you can always edit it in OSM. From what I have heard from their employees, they import new OSM data weekly, which means your changes should start reflecting for you (and everyone else) by next week. If you’re new, follow the Beginners’ guide and join the OSM India community at community.osm.be/resources/asia/india/ for any doubts and for participating in various mapping events.
PS: You can see live OSM edits in the Indian subcontinent here.
Combining BGP confederations and AS override can potentially
create a BGP routing loop, resulting in an indefinitely expanding AS path.
BGP confederation is a technique used to reduce the number of iBGP sessions
and improve scalability in large autonomous systems (AS). It divides an AS into
sub-ASes. Most eBGP rules apply between sub-ASes, except that next-hop, MED, and
local preferences remain unchanged. The AS path length ignores contributions
from confederation sub-ASes. BGP confederation is rarely used and BGP route
reflection is typically preferred for scaling.
AS override is a feature that allows a router to replace the ASN of a
neighbor in the AS path of outgoing BGP routes with its own. It’s useful when
two distinct autonomous systems share the same ASN. However, it interferes with
BGP’s loop prevention mechanism and should be used cautiously. A safer
alternative is the allowas-in directive.1
In the example below, we have four routers in a single confederation, each in
its own sub-AS. R0 originates the 2001:db8::1/128 prefix. R1, R2, and
R3 forward this prefix to the next router in the loop.
BGP routing loop using a confederation
The router configurations are available in a Git repository. They are
running Cisco IOS XR. R2 uses the following configuration for BGP:
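The real configuration is in the repository linked above; in spirit, it looks like this sketch (the confederation identifier and neighbor addresses are placeholders I made up):

router bgp 64502
 bgp confederation identifier 64496
 bgp confederation peers
  64501
  64503
 !
 address-family ipv6 unicast
 !
 neighbor 2001:db8::2:0
  remote-as 64501
  address-family ipv6 unicast
  !
 !
 neighbor 2001:db8::3:1
  remote-as 64503
  address-family ipv6 unicast
   as-override
   next-hop-self
  !
 !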
The session with R3 uses both as-override and next-hop-self directives.
The latter is only necessary to make the announced prefix valid, as there is no
IGP in this example.2
Here’s the sequence of events leading to an infinite AS path:
R0 sends the prefix to R1 with AS path (64500).
R1 selects it as the best path, forwarding it to R2 with AS path (64501 64500).
R2 selects it as the best path, forwarding it to R3 with AS path (64502 64501 64500).
R3 selects it as the best path. It would forward it to R1 with AS path
(64503 64502 64501 64500), but due to AS override, it substitutes R1’s
ASN with its own, forwarding it with AS path (64503 64502 64503 64500).
R1 accepts the prefix, as its own ASN is not in the AS path. It compares
this new prefix with the one from R0. Both (64500) and (64503 64502
64503 64500) have the same length because confederation sub-ASes don’t
contribute to AS path length. The first tie-breaker is the router ID.
R0’s router ID (1.0.0.4) is higher than R3’s (1.0.0.3). The new
prefix becomes the best path and is forwarded to R2 with AS path (64501
64503 64501 64503 64500).
R2 receives the new prefix, replacing the old one. It selects it as the
best path and forwards it to R3 with AS path (64502 64501 64502 64501
64502 64500).
R3 receives the new prefix, replacing the old one. It selects it as the
best path and forwards it to R1 with AS path (64503 64502 64503 64502
64503 64502 64500).
R1 receives the new prefix, replacing the old one. Again, it competes with
the prefix from R0, and again the new prefix wins due to the lower router
ID. The prefix is forwarded to R2 with AS path (64501 64503 64501 64503
64501 64503 64501 64500).
A few iterations later, R1 views the looping prefix as follows:4
RP/0/RP0/CPU0:R1#show bgp ipv6 u 2001:db8::1/128 bestpath-compare
BGP routing table entry for 2001:db8::1/128
Last Modified: Jul 28 10:23:05.560 for 00:00:00
Paths: (2 available, best #2)
 Path #1: Received by speaker 0
 Not advertised to any peer
 (64500)
   2001:db8::1:0 from 2001:db8::1:0 (1.0.0.4), if-handle 0x00000000
     Origin IGP, metric 0, localpref 100, valid, confed-external
     Received Path ID 0, Local Path ID 0, version 0
     Higher router ID than best path (path #2)
 Path #2: Received by speaker 0
 Advertised IPv6 Unicast paths to peers (in unique update groups):
   2001:db8::2:1
 (64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64500)
   2001:db8::4:0 from 2001:db8::4:0 (1.0.0.3), if-handle 0x00000000
     Origin IGP, metric 0, localpref 100, valid, confed-external, best, group-best
     Received Path ID 0, Local Path ID 1, version 37
     best of AS 64503, Overall best
There’s no upper bound for an AS path, but BGP messages have size limits (4096
bytes per RFC 4271 or 65535 bytes per RFC 8654). At some
point, BGP updates can’t be generated. On Cisco IOS XR, the BGP process crashes
well before reaching this limit.5
The main lessons from this tale are:
never use BGP confederations under any circumstances, and
be cautious of features that weaken BGP routing loop detection.
When using BGP confederations with Cisco IOS XR, use
allowconfedas-in instead. It’s available since IOS XR 7.11. ↩︎
Using BGP confederations is already inadvisable. If you don’t use the
same IGP for all sub-ASes, you’re inviting trouble! However, the scenario
described here is also possible with an IGP. ↩︎
When an AS path segment is composed of ASNs from a confederation, it
is displayed between parentheses. ↩︎
By default, IOS XR paces eBGP updates. This is controlled by the
advertisement-interval directive. Its default value is 30 seconds for eBGP
peers (even in the same confederation). R1 and R2 set this value to 0,
while R3 sets it to 2 seconds. This gives some time to watch the AS path
grow. ↩︎
This is CSCwk15887. It only happens when using as-override on an
AS path with a too long AS_CONFED_SEQUENCE. This should be fixed around
24.3.1. ↩︎
This week our family suffered another loss, with the passing of my brother-in-law. We will miss him dearly. On our way down to Phoenix to console our nephew, who had just lost his dad, our car blew up. Last week we were in a rollover accident that totaled our truck and left me with a broken arm. We are now in great need of a new vehicle. Please consider donating to this fund: https://gofund.me/033eb25d . Kubuntu is out of money and I am between work packages with the ‘project’. We are 50 miles away from the closest town for supplies; essentials such as water require a vehicle.
I have had bad years before (covid), in which I lost my beloved job at Blue Systems. I made a vow to myself to never let my personal life affect my work again. I have so far kept that promise to myself, and without further ado I present to you my work.
Kubuntu:
Many SRUs are awaiting the verification stage, including the massive apparmor policy bug.
The sddm fix for the black screen on second boot has passed verification and should make the .1 release.
See Debian for the qt6 Plasma / applications work.
Debian:
qtmpv – in NEW
arianna – in NEW
kamera – uploading today
kcharselect – Experimental
Tokodon – Done, but needs qtmpv to pass NEW
Gwenview – WIP needs kamera, kio-extras
kio-extras – WIP
KDE Snaps:
Please note: for the most part the Qt6 snaps are in --edge, except the few in the ‘project’ that are heavily tested. Please help test the --edge snaps so I can promote them.
Elisa
Okular
Konsole (please note this is a confined terminal for the ‘project’ and not very useful except to ssh to the host system)
Kwrite
Gwenview
Kate (--classic)
Gcompris
Alligator
Ark
Blinken
Bomber
Bovo
Calindori
Digikam
Dragon
Falkon
Filelight
WIP Snaps or MR’s made
KSpaceDuel
Ksquares
KSudoku
KTuberling
Kubrick
lskat
Palapeli
Kajongg
Kalzium
Kanagram
Kapman
Katomic
KBlackBox
KBlocks
KBounce
KBreakOut
KBruch
Please note that 95% of the snaps are free-time work. The project covers 5. I am going as fast as I can between Kubuntu/Debian and the project commitments. Not to mention I have only one arm! My GSoC student is also helping, which you can read all about here: https://soumyadghosh.github.io/website/interns/gsoc-2024/gsoc-week-3-week-7/
There is still much work to do in Kubuntu to be Plasma 6 ready for Oracular and they are out of funds. I will still continue my work regardless, but please consider donating until we can procure a consistent flow of funding : https://kubuntu.org/donate/
In mid-June I picked up an unknown infection in my left ankle which turned out to be antibiotic resistant. The infection caused cellulitis. After five weeks of trial and error and treatment, the infection is beaten but I am still recovering from the cellulitis. I don’t know how long it will take to be fully recovered, nor how long before I can be “useful” again: I’m currently off work (and thus off my open source and other commitments too). Hopefully soon! That’s why I’ve been quiet.
More than two years after I last uploaded the purity-off Debian
package, its autopkgtest (the Debian distribution-wide continuous
integration system) started failing on arm64, and only on this architecture.
The failing test
is very simple: it prints a long stream of "y" or "n"
characters to purity(6)'s
standard input and then checks the output for the expected result.
While investigating the live autopkgtest system, I figured out that:
The paging function of purity(6) became enabled, but only
on arm64!
Paging consumed more "y" characters from standard input
than the 5000 provided by the test script.
The paging code does not consider EOF a valid input, so at that point
it would start asking again and printing "--- more ---" forever in
a tight loop.
And this output, being redirected to a file, would fill the file
system where the autopkgtest is running.
I did not have time to verify this, but I have noticed that the
25-year-old purity(6) program calls TIOCGWINSZ
to determine the screen length, and then uses the result in the
answer buffer without checking whether the
ioctl(2)
call returned an error.
Which it obviously does in this case, because standard input is not a
console but a pipe.
So my theory is that paging is enabled because the undefined result of
the ioctl has changed, and only on this architecture.
Since I do not want to fix purity(6) right now, I have
implemented the workaround of printing many more "y"
characters as input.
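As an illustrative sketch of that workaround (the count and file names here are made up, not taken from the actual test script):
# feed far more answers than the pager loop can ever consume
yes y | head -n 100000 | purity > output.txt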
DebConf24, the 25th annual
Debian Developer Conference, is taking place in
Busan, Republic of Korea from July 28th to August 4th, 2024.
Debian contributors from all over the world have come together at Pukyong
National University, Busan, to participate and work in a conference run
entirely by volunteers.
Today the main conference starts with around 340 expected attendees and over 100
scheduled activities, including 45-minute and 20-minute talks, Bird of a Feather
("BoF") team meetings, workshops, a job fair, as well as a variety
of other events.
The full schedule is updated each
day, including activities planned ad-hoc by attendees over the course of the
conference.
If you would like to engage remotely, you can follow the video streams
available from the DebConf24 website for the
events happening in the three talk rooms: Bada, Somin and Pado.
Or you can join the conversations happening inside the talk rooms via the
OFTC IRC network in the
#debconf-bada,
#debconf-somin, and
#debconf-pado channels.
Please also join us in the #debconf channel for
common discussions related to DebConf.
You can also follow the live coverage of news about DebConf24 provided by our
micronews service or the @debian profile on
your favorite social network.
DebConf is committed to a safe and welcoming environment for all participants.
Please see our Code of Conduct page
for more information on this.
Debian thanks the commitment of numerous sponsors to support DebConf24,
particularly our Platinum Sponsors: Proxmox, Infomaniak and
Wind River.
DebConf24, the 25th edition of the Debian
conference is taking place in Pukyong National University at Busan, Republic of
Korea.
Thanks to the hard work of its organizers, it will again be an interesting and
fruitful event for attendees.
We would like to warmly welcome the sponsors of DebConf24, and introduce them to
you.
We have three Platinum sponsors.
Proxmox is the first Platinum sponsor.
Proxmox provides powerful and user-friendly Open Source server software.
Enterprises of all sizes and industries use Proxmox solutions to deploy
efficient and simplified IT infrastructures, minimize total cost
of ownership, and avoid vendor lock-in.
Proxmox also offers commercial support, training services, and an extensive
partner ecosystem to ensure business continuity for its customers.
Proxmox Server Solutions GmbH was established in 2005
and is headquartered in Vienna, Austria.
Proxmox builds its product offerings on top of the Debian operating system.
Our second Platinum sponsor is Infomaniak.
Infomaniak is an independent cloud service provider recognised throughout
Europe for its commitment to privacy, the local economy and the environment.
Recording growth of 18% in 2023, the company is developing a suite of online
collaborative tools and cloud hosting, streaming, marketing and events
solutions.
Infomaniak uses exclusively renewable energy, builds its own data centers and
develops its solutions in Switzerland, without relocating.
The company powers the website of the Belgian radio and TV
service (RTBF) and provides streaming for more than 3,000
TV and radio stations in Europe.
Wind River is our third Platinum sponsor.
For nearly 20 years, Wind River has led in commercial Open Source Linux
solutions for mission-critical enterprise edge computing.
With expertise across aerospace, automotive, industrial, telecom, and more,
the company is committed to Open Source through initiatives like eLxr, Yocto,
Zephyr, and StarlingX.
Our Gold sponsors are:
Ubuntu,
the Operating System delivered by Canonical.
Freexian,
a services company specialized in Free Software and in particular Debian
GNU/Linux, covering consulting, custom developments, support and training.
Freexian has a recognized Debian expertise thanks to the participation of
Debian developers.
Lenovo,
a global technology leader manufacturing a wide portfolio of connected
products including smartphones, tablets, PCs and workstations as well as
AR/VR devices, smart home/office and data center solutions.
Korea Tourism Organization,
whose purpose is to advance tourism as a key driver for national economic
growth and the enhancement of national welfare; it intends to be a public
organization that makes the Korean people happier and promotes national wealth
through tourism.
Busan IT Industry Promotion Agency,
an industry promotion organization that contributes to the innovation of the
digital economy with the power of IT and CT and
supports the ecosystem for innovative local startups and companies to grow.
Microsoft,
who enables digital transformation for the era of an intelligent cloud and an
intelligent edge.
Its mission is to empower every person and every organization on the planet to
achieve more.
doubleO,
a company that specializes in consulting and developing empirical services
using big data analysis and artificial intelligence.
doubleO provides a variety of data-centered services together with small and
medium-sized businesses in Busan/Gyeongnam.
Our Silver sponsors are:
Roche, a major international pharmaceutical
provider and research company dedicated to personalized healthcare.
Two Sigma, which combines rigorous inquiry, technology,
data science, and invention to bring science to finance and help solve the
toughest challenges across financial services.
Arm: leading technology provider of processor
IP, Arm powered solutions have been supporting innovation for
more than 30 years and are deployed in over 280 billion chips to date.
Google, one of the largest technology companies in
the world, providing a wide range of Internet-related services and products
such as online advertising technologies, search, cloud computing, software,
and hardware.
FSIJ, the Free Software Initiative
of Japan, a non-profit organization dedicated to supporting Free Software
growth and development.
Busan Tourism Organisation:
a leading public corporation that generates social and economic value in
Busan's tourism industry, developing tourism resources in accordance with
government policies and invigorating the tourism industry.
Civil Infrastructure Platform,
a collaborative project hosted by the Linux Foundation,
establishing an open source “base layer” of industrial grade software.
Collabora, a global consultancy delivering
Open Source software solutions to the commercial world.
Matanel Foundation, which operates in Israel;
its first concern is to preserve the cohesion of a society and a nation
plagued by divisions.
Thanks to all our sponsors for their support!
Their contributions make it possible for a large number of Debian contributors
from all over the globe to work together, help and learn from each other in
DebConf24.
DebCamp has started, and in a couple of days, we will fully be in DebConf24
mode!
As most of you know, an important part that binds Debian together is our
cryptographic identity assurance, and that is in good measure tightened by the
Continuous Key-Signing Parties we hold at DebConfs and other Debian and Free
Software gatherings.
As I have done during (most of) the past DebConfs, I have prepared a set of
pseudo-social maps to help you find where you are in the OpenPGP mesh of our
conference. Naturally, Web-of-Trust maps should be user-centered, so find your
own at:
The list is now final and will not receive any modifications (I asked
for them some days ago); if your
name still appears on the list and you don’t want to be linked to the DC24 KSP
in the future, tell me and I’ll remove it from future versions of the list
(but it remains part of the final DC24
file, as its checksum
is already final).
Speaking of which!
If you are to be a part of the keysigning, get the final DC24
file and, on a device
you trust, check its SHA256 by running:
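The command itself is presumably the standard invocation (the file name here is hypothetical):
$ sha256sum ksp-dc24.txt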
Make sure the resulting number matches the one I’m presenting. If it doesn’t,
ensure your copy of the file is not corrupted (i.e. download it again). If it
still does not match, notify me immediately.
Does any of the above confuse you? Please come to (or at least, follow the
stream for) my session on DebConf opening day, Continuous Key-Signing Party
introduction,
10:30 Korean time; I will do my best to explain the details to you.
PS- I will soon provide a simple, short PDF that will probably be mass-printed
at FrontDesk so that you can easily track your KSP progress.
A new minor release 0.4.23 of RQuantLib
just arrived at CRAN earlier
today, and will be uploaded to Debian in due course.
QuantLib is a rather
comprehensive free/open-source library for quantitative
finance. RQuantLib
connects (some parts of) it to the R environment and language, and has
been part of CRAN for more than
twenty-two years (!!) as it was one of the first packages I
uploaded.
This release of RQuantLib
updates to QuantLib version 1.35
released this morning. It accommodates some removals following earlier
deprecations, and also updates most of the code to a more readable and
compact form, creating shared pointers via
make_shared() along with auto.
Changes in RQuantLib version 0.4.23 (2024-07-23)
Adjustments for QuantLib 1.35 and removal of deprecated code (in
utility functions and dividend case of vanilla options)
Adjustments for new changes in QuantLib 1.35
Refactoring most C++ files making more use of both
auto and make_shared to simplify and shorten
expressions
The twelfth release of the qlcal package
arrived at CRAN today.
qlcal
delivers the calendaring parts of QuantLib. It is provided (for the R
package) as a set of included files, so the package is self-contained
and does not depend on an external QuantLib library (which can be
demanding to build). qlcal covers
over sixty country / market calendars and can compute holiday lists, their
complement (i.e. business day lists), and much more. Examples
are in the README at the repository, the package page,
and of course the CRAN package
page.
This release synchronizes qlcal with
the QuantLib release 1.35 (made
today) and contains more updates to 2024 calendars.
Changes in version 0.0.12
(2024-07-22)
Synchronized with QuantLib 1.35 released today
Calendar updates for Chile, India, United States, Brazil
Courtesy of my CRANberries, there
is a diffstat report for this
release. See the project page
and package documentation for more details, and more examples. If you
like this or other open-source work I do, you can sponsor me at
GitHub.
On the 18th of May I blogged about my new 5120*2160 monitor [1]. One thing I noted was that one Netflix movie had run in an aspect ratio that used all the space on the monitor. I still don’t know whether that movie was cropped in a letterbox manner, but other Netflix shows in “full screen” mode don’t extend to both edges. Also, one movie I downloaded was in 3840*1608 resolution, which is almost exactly the same aspect ratio as my monitor. I wonder if some company is using 5120*2160 screens for TVs; 4K and FullHD are rumoured to be cheaper than most other resolutions, partly due to TV technology being used for monitors. There is the Anamorphic Format of between 2.35:1 and 2.40:1 [2], which is a close match for the 2.37:1 of my new monitor.
I tried out the HDMI audio on a Dell laptop and my Thinkpad Yoga Gen3 and found it to be of poor quality; it seemed limited to 2560*1440, and at this time I’m not sure how much of the fault is due to the laptops and how much is due to the monitor. The monitor docs state that it needs HDMI version 2.1, which was released in 2017; my Thinkpad Yoga Gen3 was released in 2018, so it probably doesn’t have that. The HDMI cable in question did 4K resolution on my previous monitor, so it should all work at a minimum of 4K resolution.
The switching between inputs is a problem. If I switch between DisplayPort for my PC and HDMI for a laptop, the monitor will usually time out before the laptop establishes a connection and then switch back to the DisplayPort input. So far I have had to physically disconnect the input source I don’t want to use. The DisplayPort switch that I’ve used doesn’t seem designed to work with resolutions higher than 4K.
I’ve bought a new USB-C dock which is described as doing 8K. Since my Thinkpad is described as supporting 5120×2880@60Hz over USB-C, I should be able to get 5120*2160 without any problems; however, for unknown reasons I only get 4K. For work I’m using a Dell Latitude 7400 2in1 that’s apparently only capable of 4096*2304@24Hz, which is fewer pixels than 5120*2160, and it will also only do 4K resolution. But for both those cases it’s still a significant improvement over 2560*1440. I tested with a Dell Latitude 7440, which gave the full 5120*2160 resolution; I was unable to find specs on the 7440’s maximum resolution. I have also bought a DisplayPort switch rated at 8K resolution. I got a switch that doesn’t also do USB because the ones that do 8K resolution and USB are about $70. The only KVM switch I saw for 8K resolution at a reasonable price was one designed for switching between two laptops, and there don’t seem to be any adaptors to convert from regular DisplayPort to USB-C alternate mode, so that wasn’t viable. Currently I have the old KVM switch used for USB only (for keyboard and mouse) and the new switch which only does DisplayPort, so I have two buttons to push when switching between input sources, which isn’t too bad.
It seems that for PCs resolutions with more pixels than 4K are as difficult and inconvenient now as 4K was 6 years ago when I started doing it. If you want higher than 4K resolution to just work at this time then you need Apple hardware.
The monitor has a series of modes for different types of output; I’ve found “standard” to be good for text and “movie” to be good for watching movies/TV and for playing RTS games. I previously wrote about how to use ddcutil to use a monitor as a KVM switch [3]; unfortunately I can’t do this with the new monitor, as the time the monitor waits for a good signal on a new input after changing is shorter than the time Linux on my laptops takes to enable HDMI output. I’ve found the following commands to do the basics.
# get display mode
ddcutil getvcp DC
# set standard mode
ddcutil setvcp DC 0
# set movie mode
ddcutil setvcp DC 03
Now that I have that going the next thing I want to do is to have it switch between “standard” and “movie” modes when I switch keyboard focus.
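A small wrapper along these lines could then be called from a window-manager focus hook (an untested sketch; the hook mechanism depends on the desktop environment):
#!/bin/bash
# switch the monitor profile via DDC; takes "standard" or "movie"
case "$1" in
  standard) ddcutil setvcp DC 0 ;;
  movie)    ddcutil setvcp DC 03 ;;
  *) echo "usage: $0 standard|movie" >&2; exit 1 ;;
esac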
My work on overhauling dhcpcd as the prime replacement for ISC's discontinued DHCP client is done. The package has achieved stability, both upstream and in Debian. The only remaining points are bug #1038882, to swap the Priorities of isc-dhcp-client and dhcpcd-base in the repository's override, and swapping ifupdown's search order to put dhcpcd first.
Meanwhile, ifupdown's de-facto maintainer prompted me to solicit opinions on which of the 4 ifupdown implementations should ship with a minimal installation for Trixie. This, in turn, re-opened the debate of what should be Debian's default network configuration framework (see the thread starting with this post).
networkd
Given how most Debian ports (except for Hurd) ship with systemd, which includes networkd (disabled by default), many people in the thread feel that this should become the default network configuration tool for minimal installations. As it happens, most of my hosts fit that scenario, so I figured I would give networkd another go on one host.
I used the following minimalistic /etc/systemd/network/dhcp.network:
[Match]
Name=en* wl*
[Network]
DHCP=yes
This correctly configured IPv4 via DHCP, with the small caveat that it doesn't update /etc/resolv.conf unless resolvconf or systemd-resolved is installed.
However, networkd's default IPv6 settings really are not suitable for public consumption. The key issues (see Bug #1076432):
Temporary addresses are not enabled by default. Worse, the setting is ignored if it was enabled by sysctl during bootup. This is a major privacy issue. Adding IPv6PrivacyExtensions=yes to the above exposed another issue: instead of using the fe80 address generated by the kernel, networkd adds a new one.
Networkd uses EUI-64 addresses by default. This is another major privacy issue, since EUI-64 addresses are forensically traceable to the interface's MAC address. Worse, the setting is ignored if stable-privacy was enabled by sysctl during bootup. To top it all, networkd does stable-privacy using systemd's time-proven brain-dead approach of reinventing the wheel: instead of merely setting the kernel's address generation mode to 3 and letting the kernel configure the secret address, it expects the secret address to be spelled out in the systemd unit.
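For reference, the kernel-side mechanism referred to above can be exercised directly; a sketch, with the interface name as a placeholder:
# addr_gen_mode 3 = stable-privacy with a kernel-generated secret
$ sudo sysctl -w net.ipv6.conf.eth0.addr_gen_mode=3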
Conclusion: networkd works well enough for someone configuring an IPv4-only network from 20 years ago, but is utterly inadequate for IPv6 or dual-stack installations, doubly so on a software distribution that claims to care about privacy and network security.
Here comes the third article of the five-episode blog post series on Polis,
written by Guido Berhörster, a member of staff at my company Fre(i)e Software GmbH.
Enjoy this read on Guido's work on Polis,
Mike
Issues extending Polis and adjusting our goals (this article)
Creating (a) new frontend(s) for Polis
Current status and roadmap
Polis - Issues extending Polis and adjusting our Goals
After the initial implementation of limited branding support, user
feedback and the involvement of a UX designer led to the conclusion that we
needed more far-reaching changes to the user interface in order to reduce
visual clutter, rearrange and improve UI elements, and provide better
integration with the websites in which conversations are embedded.
Challenges when visualizing Data in Polis
Polis visualizes groups using a spatial projection of users based on
similarities in voting behavior and places them in two to five groups
using a clustering algorithm.
During our testing and evaluation, users were rarely able to interpret the
visualization and often intuitively made incorrect assumptions, e.g. by
associating the filled area of a group with its significance or size. After
consultation with a member of the Multi-Agent Systems (MAS) Group at the
University of Groningen we chose to temporarily replace the visualization
offered by Polis with simple bar charts representing agreement or disagreement
with statements of a group or the majority. We intend to revisit this and
explore different forms of visualization at a later point in time.
The different factors playing into the weight attached to statements, which
determines the pseudo-random order in which they are presented for voting
(“comment routing”), proved difficult to explain to stakeholders and users, and
the admission of the ad-hoc and heuristic nature of the algorithm used [1] by
Polis’ authors led to the decision to temporarily remove this feature.
Instead, statements should be placed into three groups, namely
metadata questions,
seed statements,
and participant statements
Statements should then be sorted by group, but in a fully randomized order
within each group, so that metadata questions are presented before seed
statements, which in turn are presented before participants’ statements. This
simpler method was deemed sufficient for the scale of our pilot projects;
however, we intend to revisit this decision and explore different methods of
“comment routing” in cooperation with our scientific partners at a later point
in time.
An evaluation of the requirements for implementing mandatory authentication and
adding support for additional authentication methods to Polis showed that
significant changes to both the administration and participation frontend were
needed due to a lack of an abstraction layer or extension mechanism and the
current authentication providers being hardcoded in many parts of the code
base.
A New Frontend is born: Particiapp
Based on the implementation details of the participation frontend, the invasive
nature of the changes required, and the overhead of keeping up with active
upstream development, it became clear that a different, more flexible approach
to development was needed. This ultimately led to the creation of
Particiapp, a new Open Source project providing the building blocks and
necessary abstraction layers for rapid prototyping and experimentation with
different frontends which are compatible with, but independent from, Polis.
[1] Small, Christopher T., Bjorkegren, Michael, Erkkilä, Timo, Shaw, Lynette and Megill, Colin (2021). Polis: Scaling deliberation by mapping high dimensional opinion spaces. Recerca. Revista de Pensament i Anàlisi, 26(2), pp. 1-26. ↩
The recent issue of Windows security software killing computers has reminded me about the issue of management software for Dell systems. I wrote policy for the Dell management programs that extract information from iDRAC and store it in Linux. After the break I’ve pasted in the policy. It probably needs some changes for recent software; it was last tested on a PowerEdge T320, and prior to that was used on a PowerEdge R710, both of which are old hardware and use different management software from recent hardware. One would hope that the recent software would be much better, but usually such hope is in vain. I deliberately haven’t submitted this for inclusion in the reference policy because it’s for proprietary software and also because it permits many operations that we would prefer not to permit.
The policy is after the break because it’s larger than you want on a Planet feed. But first I’ll give a few selected lines that are bad in a noteworthy way:
sys_admin means the ability to break everything
dac_override means break Unix permissions
mknod means a daemon creates devices due to a lack of udev configuration
sys_rawio means someone didn’t feel like writing a device driver. Maintaining a device driver for DKMS is hard, and getting a driver accepted upstream requires writing quality code; in any case this is a bad sign.
self:lockdown is being phased out, but used to mean bypassing some integrity protections; that would usually be related to sys_rawio or similar.
dev_rx_raw_memory is bad; reading raw memory allows access to pretty much everything, and executing raw memory is something I can’t imagine a good use for. The Reference Policy doesn’t use this anywhere!
dev_rw_generic_chr_files usually means a lack of udev configuration as udev should do that.
storage_raw_write_fixed_disk shouldn’t be needed for this sort of thing, it doesn’t do anything that involves managing partitions.
Now, without network access or other obvious means of remote control, this level of access, while excessive, isn’t necessarily going to allow bad things to happen due to an outside attack. But if there are bugs in the software, there’s nothing to stop it from giving the worst results.
Leonardo and I are happy to
announce another maintenance release, 0.1.3, of our dtts package,
which has been on CRAN for a
good two years now.
dtts
builds upon our nanotime
package as well as the beloved data.table to bring
high-performance and high-resolution indexing at the
nanosecond level to data frames. dtts aims to
offer the time-series indexing versatility of xts (and zoo) to the immense
power of data.table while
supporting the highest nanosecond resolution.
This release contains two nice and focussed contributed pull
requests. Tomas Kalibera, who
as part of R Core looks after everything concerning R on Windows, and
then some, needed an adjustment for pending / upcoming R on Windows
changes for builds with LLVM which is what Arm-on-Windows uses. We
happily obliged: neither Leonardo nor I see much of
Windows these decades. (Easy thing to say on a day like today with its
crowdstrike
hammer falling!) Similarly, Michael Chirico supplied a
PR updating one of our tests to an upcoming change at data.table which we are of
course happy to support.
The short list of changes follows.
Changes in version 0.1.3
(2024-07-18)
Windows builds use localtime_s with LLVM (Tomas
Kalibera in #16)
Tests code has been adjusted for an upstream change in data.table tests for all.equal (Michael
Chirico in #18 addressing
#17)
Another thing to note is that include_directories adds both the source
directory and the corresponding build directory to the include path, so you
don't have to care.
Its documentation is not as easy to find as I'd like (kudos to Kangie on IRC),
and hopefully this blog post will make it easier for me to find it in the
future.
The Rcpp Core Team is once again pleased to announce a new release
(now at 1.0.13) of the Rcpp package.
It arrived on CRAN earlier
today, and has since been uploaded to Debian. Windows and macOS builds
should appear at CRAN in the next few days, as will builds in different
Linux distributions, and of course r2u should catch up
tomorrow too. The release was uploaded last week, but not only does Rcpp always get flagged because of the
grandfathered .Call(symbol), but CRAN also found two packages
‘regressing’, which then required them to take five days to get back to
us. One issue was known; another
did not reproduce under our tests against over 2800 reverse dependencies,
leading to the eventual release today. Yay. Checks are good and
appreciated, and it does take time for humans to review them.
This release continues with the six-months January-July cycle started
with release
1.0.5 in July 2020. As a reminder, we do of course make interim
snapshot ‘dev’ or ‘rc’ releases available via the Rcpp drat repo as well as
the r-universe page and
repo and strongly encourage their use and testing—I run my systems
with these versions which tend to work just as well, and are also fully
tested against all reverse-dependencies.
Rcpp has long established itself
as the most popular way of enhancing R with C or C++ code. Right now,
2867 packages on CRAN depend on
Rcpp for making analytical code go
faster and further, along with 256 in BioConductor. On CRAN, 13.6% of
all packages depend (directly) on Rcpp, and 59.9% of all compiled packages
do. From the cloud mirror of CRAN (which is but a subset of all CRAN
downloads), Rcpp has been downloaded
86.3 million times. The two published papers (also included in the
package as preprint vignettes) have, respectively, 1848 (JSS, 2011) and 324 (TAS, 2018)
citations, while the book (Springer useR!,
2013) has another 641.
This release is incremental as usual, generally preserving existing
capabilities faithfully while smoothing out corners and / or extending
slightly, sometimes in response to changing and tightened demands from
CRAN or R standards. The move towards a
more standardized approach for the C API of R leads to a few changes;
Kevin did most of the PRs for this. Andrew Johnson also provided a very
nice PR to update internals taking advantage of variadic templates.
The full list below details all changes, their respective PRs and, if
applicable, issue tickets. Big thanks from all of us to all
contributors!
Changes in
Rcpp release version 1.0.13 (2024-07-11)
Changes in Rcpp API:
Set R_NO_REMAP if not already defined (Dirk in #1296)
Add variadic templates to be used instead of generated code
(Andrew Johnson in #1303)
Count variables were switched to size_t to avoid
warnings about conversion-narrowing (Dirk in #1307)
Rcpp now avoids the usage of the (non-API) DATAPTR function when
accessing the contents of Rcpp Vector objects where possible. (Kevin in
#1310)
Rcpp now emits an R warning on out-of-bounds Vector accesses.
This may become an error in a future Rcpp release. (Kevin in #1310)
Switch VECTOR_PTR and STRING_PTR to new
API-compliant RO variants (Kevin in #1317 fixing #1316)
Changes in Rcpp Deployment:
Small updates to the CI test containers have been made (#1304)
I recently started to work with Nginx to explore how to configure a so-called
server block. It’s quite different from Apache, but there are tons of good
websites out there which explain the different steps and options quite well. I
also quickly realized that I need to be able to configure my Nginx setups so
that content is delivered over https, with automatic redirection from http
URLs.
Let’s install Nginx
Installing Nginx
$ sudo apt update
$ sudo apt install nginx
Checking your Web Server
We can now check whether the nginx service is active or inactive.
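The status output below is the kind produced by systemd's status query, presumably invoked as:
$ sudo systemctl status nginx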
Output
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2024-02-12 09:59:20 UTC; 3h ago
Docs: man:nginx(8)
Main PID: 2887 (nginx)
Tasks: 2 (limit: 1132)
Memory: 4.2M
CPU: 81ms
CGroup: /system.slice/nginx.service
├─2887 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─2890 nginx: worker process
Nginx is now successfully installed and in a running state.
How To Secure Nginx with Let’s Encrypt on Debian 12
In this documentation, you will use Certbot to obtain a free SSL certificate
for Nginx on Debian 12 and set up your certificate.
Step 1 — Installing Certbot
$ sudo apt install certbot python3-certbot-nginx
Certbot is now ready to use, but in order for it to automatically configure
SSL for Nginx, we need to verify some of Nginx’s configuration.
Step 2 — Confirming Nginx’s Configuration
Certbot needs to be able to find the correct server block in your Nginx
configuration for it to be able to automatically configure SSL. Specifically,
it does this by looking for a server_name directive that matches the domain you
request a certificate for. To check, open the configuration file for your
domain using nano or your favorite text editor.
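For example, following the usual Debian layout (the path and domain here are placeholders, not from the original):
$ sudo nano /etc/nginx/sites-available/example.com
Inside, the server_name directive should list your domain, along the lines of:
server_name example.com www.example.com;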
Fill in the above data for your project, then save the file, quit your editor,
and verify the syntax of your configuration edits.
$ sudo nginx -t
Step 3 — Obtaining an SSL Certificate
Certbot provides a variety of ways to obtain SSL certificates through
plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading
the config whenever necessary. To use this plugin, run the following
command.
$ sudo certbot --nginx -d example.com
The configuration will be updated, and Nginx will reload to pick up the new
settings. certbot will wrap up with a message telling you the process was
successful and where your certificates are stored.
Output
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/example.com/privkey.pem
Your cert will expire on 2024-05-12. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
Your certificates are now downloaded, installed, and loaded. Next, check the
syntax of your configuration again.
$ sudo nginx -t
If you get an error, reopen the server block file and check for any typos or
missing characters. Once your configuration file’s syntax is correct, reload
Nginx to load the new configuration.
$ sudo systemctl reload nginx
Try reloading your website using https:// and notice your browser’s security
indicator. It should indicate that the site is properly secured, usually with
a lock icon.
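The Debian certbot package normally also installs a systemd timer for automatic renewal; a dry run such as the following should confirm that renewal will work:
$ sudo certbot renew --dry-run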
While I was living in Argentina, we (my family) found ourselves checking
weather forecasts almost constantly; weather there can be quite unexpected,
much more so than here in Mexico. So it took me a bit of tinkering to come up
with a couple of simple scripts to show the weather forecast as part of my
Waybar setup. I hadn’t bothered to share
them with anybody, as I believe them to be quite trivial and quite dirty.
I am using OpenWeather’s open
API. I had to register to get an APPID, and it
allows me up to 1,000 API calls per day, more than plenty for my uses, even
if I am logged in at my desktops at three different computers (not an uncommon
situation). Having that, I set up a file named /etc/get_weather that
currently reads:
# Home, Mexico City
LAT=19.3364
LONG=-99.1819
# # Home, Paraná, Argentina
# LAT=-31.7208
# LONG=-60.5317
# # PKNU, Busan, South Korea
# LAT=35.1339
# LONG=129.1055
APPID=SomeLongRandomStringIAmNotSharing
Then, I have a simple script, /usr/local/bin/get_weather, that fetches the
current weather and the forecast, and stores them as /run/weather.json and
/run/forecast.json:
#!/usr/bin/bash
CONF_FILE=/etc/get_weather

if [ -e "$CONF_FILE" ]; then
    . "$CONF_FILE"
else
    echo "Configuration file $CONF_FILE not found"
    exit 1
fi

if [ -z "$LAT" -o -z "$LONG" -o -z "$APPID" ]; then
    echo "Configuration file must declare latitude (LAT), longitude (LONG)"
    echo "and app ID (APPID)."
    exit 1
fi

CURRENT=/run/weather.json
FORECAST=/run/forecast.json

wget -q "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${CURRENT}"
wget -q "https://api.openweathermap.org/data/2.5/forecast?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${FORECAST}"
This script is called by the corresponding systemd service unit, found at
/etc/systemd/system/get_weather.service:
[Unit]
Description=Get the current weather
[Service]
Type=oneshot
ExecStart=/usr/local/bin/get_weather
And it is run every 15 minutes via the following systemd timer unit,
/etc/systemd/system/get_weather.timer:
[Unit]
Description=Get the current weather every 15 minutes
[Timer]
OnCalendar=*:00/15:00
Unit=get_weather.service
[Install]
WantedBy=multi-user.target
(yes, it runs even if I’m not logged in, wasting some of my free API
calls… but within reason)
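The units presumably get activated in the usual systemd way, along these lines:
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now get_weather.timer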
Then, I declare a "custom/weather" module in the desired position of my
~/.config/waybar/waybar.config, and define it as:
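A minimal sketch of such a module (the script path and polling interval are guesses, not the original values) would be:
"custom/weather": {
    "exec": "~/.config/waybar/weather.rb",
    "return-type": "json",
    "interval": 300
},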
This script basically morphs a generic weather JSON description into another
set of JSON bits that display my weather the way I prefer to have it
displayed:
#!/usr/bin/ruby
require 'json'

Sources = {:weather  => '/run/weather.json',
           :forecast => '/run/forecast.json'}
Icons = {'01d' => '🌞', # d → day
         '01n' => '🌃', # n → night
         '02d' => '🌤️', '02n' => '🌥',
         '03d' => '☁️', '03n' => '🌤',
         '04d' => '☁️', '04n' => '🌤',
         '09d' => '🌧️', '10d' => '🌦️', '10n' => '🌧 ',
         '13d' => '❄️', '50d' => '🌫️'}

ret = {'text': nil, 'tooltip': nil, 'class': 'weather', 'percentage': 100}

# Current weather report: main text of the module
begin
  weather = JSON.parse(open(Sources[:weather], 'r').read)
  loc_name = weather['name']
  icon = Icons[weather['weather'][0]['icon']] || '?' + weather['weather'][0]['icon'] + weather['weather'][0]['main']

  temp = weather['main']['temp']
  sens = weather['main']['feels_like']
  hum  = weather['main']['humidity']
  wind_vel = weather['wind']['speed']
  wind_dir = weather['wind']['deg']

  portions = {}
  portions[:loc]  = loc_name
  portions[:temp] = '%s 🌡%2.2f°C (%2.2f)' % [icon, temp, sens]
  portions[:hum]  = '💧 %2d%%' % hum
  portions[:wind] = '🌬%2.2fm/s %d°' % [wind_vel, wind_dir]

  ret['text'] = [:loc, :temp, :hum, :wind].map { |p| portions[p] }.join(' ')
rescue => err
  ret['text'] = 'Could not process weather file (%s ⇒ %s: %s)' % [Sources[:weather], err.class, err.to_s]
end

# Weather forecast for the following hours/days
begin
  cast = []
  forecast = JSON.parse(open(Sources[:forecast], 'r').read)
  min = ''
  max = ''
  day = Time.now.strftime('%Y.%m.%d')
  by_day = {}
  forecast['list'].each_with_index do |f, i|
    by_day[day] ||= []
    time = Time.at(f['dt'])
    time_lbl = '%02d:%02d' % [time.hour, time.min]
    icon = Icons[f['weather'][0]['icon']] || '?' + f['weather'][0]['icon'] + f['weather'][0]['main']

    by_day[day] << f['main']['temp']
    if time.hour == 0
      min = '%2.2f' % by_day[day].min
      max = '%2.2f' % by_day[day].max
      cast << ' ↑ min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
      day = time.strftime('%Y.%m.%d')
      cast << ' ┍━━━━━┫ <b>%04d.%02d.%02d</b> ┠━━━━━┑' % [time.year, time.month, time.day]
    end
    cast << '%s | %2.2f°C | 🌢%2d%% | %s %s' % [time_lbl, f['main']['temp'], f['main']['humidity'], icon, f['weather'][0]['description']]
  end
  cast << ' ↑ min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]

  ret['tooltip'] = cast.join("\n")
rescue => err
  ret['tooltip'] = 'Could not process forecast file (%s ⇒ %s)' % [Sources[:forecast], err.class, err.to_s]
end

# Print out the result for Waybar to process
puts ret.to_json
The end result? Nothing too stunning, but definitively something I find useful
and even nicely laid out:
Do note that it seems OpenWeather will return the name of the closest available
meteorology station with (most?) recent data; for my home, I often get Ciudad
Universitaria, but sometimes Coyoacán or even San Ángel Inn.
In Ubuntu Touch / Lomiri, Maciej Sopyło has updated Lomiri's Weather App to operate against a different weather forecast provider (Open Meteo). Additionally, the new implementation is generic and pluggable, so other weather data providers can be added in later.
Big thanks to Maciej for working on this just in time (the previous implementation's API has recently been EOL'ed and is not available anymore to Ubuntu Touch / Lomiri users).
Lomiri Weather App - new Meteorological Terms part of the App now
While the old weather data provider implementation obtained all the meteorological information as already-localized strings from the provider, the new implementation requires all sorts of weather conditions to be translated within the Lomiri Weather App itself.
The meteorological terms are probably not easy to translate for the usual software translator, so special help might be required here.
Call for Translations: Lomiri Weather App
So, if you feel like helping out here, please join the Hosted Weblate service [1] and start working on the Lomiri Weather App.
I just got one of those utterly funny spam messages… And yes, I recognize
everybody likes building a name for themselves. But some spammers are downright
silly.
I just got the following mail:
From: Hermine Wolf <hwolf850@gmail.com>
To: me, obviously 😉
Date: Mon, 15 Jul 2024 22:18:58 -0700
Subject: Make sure that your manuscript gets indexed and showcased in the prestigious Scopus database soon.
Message-ID: <CAEZZb3XCXSc_YOeR7KtnoSK4i3OhD=FH7u+A5xSMsYvhQZojQA@mail.gmail.com>
This message has visual elements included. If they don't display, please
update your email preferences.
*Dear Esteemed Author,*
Upon careful examination of your recent research articles available online,
we are excited to invite you to submit your latest work to our esteemed
journal, '*WULFENIA*'. Renowned for upholding high standards of excellence
in peer-reviewed academic research spanning various fields, our journal is
committed to promoting innovative ideas and driving advancements in
theoretical and applied sciences, engineering, natural sciences, and social
sciences. 'WULFENIA' takes pride in its impressive 5-year impact factor of
*1.000* and is highly respected in prestigious databases including the
Science Citation Index Expanded (ISI Thomson Reuters), Index Copernicus,
Elsevier BIOBASE, and BIOSIS Previews.
*Wulfenia submission page:*
[image: research--check.png][image: scrutiny-table-chat.png][image:
exchange-check.png][image: interaction.png]
.
Please don't reply to this email
We sincerely value your consideration of 'WULFENIA' as a platform to
present your scholarly work. We eagerly anticipate receiving your valuable
contributions.
*Best regards,*
Professor Dr. Vienna S. Franz
Who cares what Wulfenia is about? It’s about you, my stupid Wolf cousin!
I do a lot in free software for ham radio, and Steve at Zero Retries encouraged me to take this email I sent him and translate it into something here.
The UK Packet Radio Network (UKPRN) is going nicely, with the Nottingham and South segment quite impressively interconnected over RF: https://nodes.ukpacketradio.network/packet-network-map.html?rfonly=1 . I’m excited to see the growth down there!
We’re sorting out forwarding and routes in Aberdeen too, and working to grow the RF path to Inverness.
This project inspired me to investigate whether
git.sesse.net could start accepting patches
in a format that was less friction than email, and didn't depend on
custom SSH-facing code written by others. And it seems it really can!
The thought was to simply allow git push from anyone, but that git push
doesn't actually push anything; it just creates a pull request (by email).
It was much simpler than I'd thought. First make an empty hooks directory
with this pre-receive hook (make sure it is readable by your web server,
and marked as executable):
#! /bin/bash
set -e

# pre-receive receives "<old-sha> <new-sha> <refname>" on standard input
read oldsha newsha refname

# mail the pushed commits as patches rather than storing them
git send-email --to=steinar+git@gunderson.no --suppress-cc=all --subject-prefix="git-anon-push PATCH" --quiet $oldsha..$newsha

echo ''
echo 'Thank you for your contribution! The patch has been sent by email and will be examined for inclusion.'
echo 'The push will now exit with an error. No commits have actually been pushed.'
exit 1
Now we can activate this hook and anonymous pushing in each project
(I already run git-http-backend on the server for pulling, and it
supports pushing just fine if you tell it to), and give www-data write
permissions to store the pushed objects temporarily:
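A plausible sketch of those commands (the repository and hook paths are hypothetical; http.receivepack is the switch that lets git-http-backend accept pushes):
# use the hooks directory created above and allow push over HTTP
git -C /srv/git/project.git config core.hooksPath /srv/git/anon-push-hooks
git -C /srv/git/project.git config http.receivepack true
# give the web server user write access for the incoming objects
sudo chown -R www-data /srv/git/project.git/objects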
And now any attempts to git push will send me patch emails
that I can review and optionally include!
It's not perfect. For instance, it doesn't support multipush,
and if you try to push to a branch that doesn't exist already,
it will error out since $oldsha is all-zeros. And the From:
header is always www-data (I didn't want to expose myself
to all sorts of weird injection attacks by trying to parse the
committer email). And of course, there's no spam control;
but if you wanted to spam me with email, you could just, like…
send email?
(I have backups, in case someone discovers some sort of evil
security hole.)
In two weeks DebConf24, the Debian conference, starts in Busan, South
Korea. Therefore I've added support for the Korean language to the
web service of FAI:
podlators contains the Perl modules and scripts used to convert Perl's
documentation language, POD, to text and manual pages.
This is another small bug-fix release that is part of iterating on getting
the new podlators incorporated into Perl core. The bug fixed in this
release was another build-system bug I introduced in recent refactorings,
this time breaking the realclean target so that some generated scripts
were not removed. Thanks to James E Keenan for the report.
DocKnot is my static web site generator, with some additional features for
managing software releases.
This release fixes some bugs in the newly-added conversion of text to HTML
that were due to my still-incomplete refactoring of that code. It still
uses some global variables, and they were leaking between different
documents and breaking the formatting. It also fixes consistency problems
with how the style parameter in *.spin files was interpreted, and
fixes some incorrect docknot update-spin behavior.
Prior to arrival in Kenya, you need to apply for an Electronic Travel Authorization (eTA) on their website by uploading all the required documents. This system has been in place since January 2024, after the country abolished the visa system in favour of the eTA portal. The required documents depend on the purpose of your visit, which in my case was attending a conference.
Here is the list of documents I submitted for my eTA:
Scanned copy of my passport
Photograph with white background
Flight tickets (reservation)
Hotel bookings (reservation)
Invitation letter from the conference
Yellow Fever vaccination certificate (optional)
Job contract (optional)
“Reservation” means I didn’t book the flights and hotels, but rather reserved them. Additionally, “optional” means that those documents were not mandatory to submit, but I included them in the “Other Documents” section to support my application. After submitting the eTA, I had to make a payment of around 35 US dollars (approximately 3,000 Indian rupees).
It took 40 hours for me to receive an email from Kenya stating that my eTA had been approved, along with an attached PDF, making this one of my smoothest experiences of obtaining travel documents :). An eTA is technically not a visa, but I put the word “visa” in the title due to familiarity with the term.
Blog Comments
The Akismet WordPress anti-spam plugin has changed its policy and no longer runs on sites that have adverts, which includes mine. Without it I get an average of about one spam comment per hour, and the interface for removing spam takes more mouse actions than desired. Email spam arrives at about the same volume; half of it is messages with SpamAssassin scores high enough to go into the MaybeSpam folder (which I go through every few weeks) and half goes straight to my inbox. Fortunately, moving spam to a folder where I can later use it to train Bayesian classification is a much faster option on a PC, and is also something I can do from my phone MUA.
As an experiment I have configured my blog to only take comments from registered users. It will be interesting to see how many spammers make it through that and to also see feedback from genuine people. People who can’t comment can tell me about it via the contact methods listed here [1].
I previously wrote about other ways of dealing with hostile comments [2]. Blogging seems to be less popular nowadays, so a Planet-specific forum doesn’t seem a viable option. It’s a pity; I think that YouTube and Facebook have taken over from blogs, and that’s not a good thing.