October 03, 2024


One of the leading global trade shows for the tech industry opens its doors in Madrid on October 16 – 17, 2024. We are happy to announce that Canonical will be there to help you choose the cloud infrastructure that suits you best.

Canonical, the publisher of Ubuntu, provides open source security, support and services. In addition to the OS, Canonical offers an integrated data and AI stack. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.

Join us at booth K87 to explore infrastructure solutions for private cloud implementation and management, as well as workload orchestration in hybrid cloud and multi-cloud environments.

Our team will also be happy to walk you through our Enterprise AI solutions, which support you in developing artificial intelligence projects in any environment. Building enterprise-grade AI projects is not an easy task, which is why our experts in Madrid are here to help you speed up your AI journey.


Join us at the booth if you:

  • Have questions about AI or MLOps
  • Are interested in IT infrastructure solutions
  • Are looking for support in defining your MLOps architecture
  • Would like to learn more about Canonical and our solutions

Here is all the information about Cloud Expo Madrid 2024:

  • Location: IFEMA Madrid. Avenida del Partenón 5. Hall 9
  • Dates: October 16 – 17, 2024
  • Hours: 9:30 AM – 7:00 PM
  • Are you interested in talking to our team today? Contact us now.
on October 03, 2024 03:09 PM
What is a vector database?

A vector database is a data storage system that organises information in the form of vectors, which are mathematical representations. These databases are designed to store, index, and query vector embeddings or numerical representations of unstructured data, including text documents, multimedia content, audio, geospatial coordinates, tables, and graphs. This setup enables fast retrieval and similarity searches, making it especially useful for efficiently managing and finding complex, high-dimensional data that is difficult to query using traditional methods.

As artificial intelligence and machine learning become more widely adopted, vector databases are increasingly vital, and the AI industry's current wave of interest and innovation has only amplified their importance. Large language models and generative AI have fueled the rise of vector databases because vector databases efficiently handle the complexity of unstructured data, such as text, images, and videos. Unlike traditional relational databases, which organise structured data into rows and columns, these systems excel at managing this type of unconventional data.

In fact, to address this challenge, vector databases convert unstructured data into vector embeddings—numerical representations that preserve the data’s relational context and semantic properties. Beyond revolutionising data management and storage, vector databases play a crucial role in enhancing the understanding and contextualisation of information, a core capability of artificial intelligence models.
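To make the embedding step concrete, here is a minimal sketch of turning short text snippets into vectors. It assumes the sentence-transformers Python package and the all-MiniLM-L6-v2 model, which are illustrative choices rather than anything mandated by a particular vector database; any embedding model that outputs fixed-length vectors would do.

from sentence_transformers import SentenceTransformer  # assumed to be installed

# Load a small, general-purpose text embedding model (produces 384-dimensional vectors).
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = ["A pen for writing notes", "A pencil with an eraser", "A mountain bicycle"]
embeddings = model.encode(documents)

print(embeddings.shape)  # (3, 384): one 384-dimensional vector per document

Semantically close snippets (the pen and the pencil) end up close together in this vector space, which is exactly the property that the similarity searches described below rely on.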

The recent surge in investment in this area highlights the critical role vector databases play in modern applications. They offer high speed and performance through advanced indexing techniques, while supporting horizontal scalability and handling large volumes of unstructured data. They are also a cost-effective alternative to training GenAI models from scratch, reducing costs and inference time. Because a vector database can recognise similarities between data points (for example, a pen and a pencil), it enables rapid prototyping of GenAI applications, boosting accuracy and reducing hallucinations through prompt augmentation. A vector database can largely automate these tasks, which would otherwise require a number of lengthy steps.

Furthermore, they are flexible and suited to various types of multidimensional data and different use cases, such as semantic search and conversational AI applications, and they are particularly valuable for real-time applications like personalised content recommendations on social networks or e-commerce platforms. Finally, they improve the output of AI models, such as LLMs, and simplify the management of new data over time.

How do they work? 

The idea behind the operation of vector databases is that while a conventional database is optimised for storing and querying tabular data consisting of strings, numbers and other scalar data, vector databases are optimised for operating on vector-type data. Therefore, query execution on a vector database differs from query execution on a conventional database. Instead of searching for exact matches between identical vectors, a vector database uses similarity search to locate vectors that reside in the vicinity of the given query vector within the multidimensional space. 

In traditional databases, we usually search for rows where the value of a given field exactly matches the filters in our query. In vector databases, on the other hand, we apply a similarity metric to find the vectors that are most similar to our query. This approach aligns more closely with the intrinsic nature of the data and offers a speed and efficiency that traditional search cannot match.

To perform a similarity search, vector databases use advanced indexing techniques, such as approximate nearest neighbour (ANN) search, Hierarchical Navigable Small World (HNSW) graphs, Product Quantization (PQ) and Locality-Sensitive Hashing (LSH), in order to optimise performance and ensure low latency during search operations. Vector indexing is crucial for managing and retrieving high-dimensional vectors efficiently.

A typical ANN query is the ‘k nearest neighbours’ (k-NN) query. In this case, vectors represent points in an N-dimensional space, effectively describing the selected data set through mathematical objects. Using low-latency queries, a k-NN search returns the k points in the data set whose vectors are closest to the query vector, i.e. its k nearest neighbours in that space.
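As an illustration of the idea, the following sketch performs an exact, brute-force k-NN search with cosine similarity over a handful of toy vectors. A real vector database would answer the same kind of query with an ANN index such as HNSW instead of scanning every vector; the data here is made up purely for demonstration.

import numpy as np

def knn(query, vectors, k=3):
    """Return the indices of the k vectors most similar to the query (cosine similarity)."""
    vectors = np.asarray(vectors, dtype=float)
    query = np.asarray(query, dtype=float)
    # Normalise so that a dot product equals cosine similarity.
    vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    query = query / np.linalg.norm(query)
    similarities = vectors @ query
    # Indices sorted by similarity, highest first.
    return np.argsort(-similarities)[:k]

# Toy 3-dimensional embeddings standing in for real model output.
data = [
    [0.9, 0.1, 0.0],  # "pen"
    [0.8, 0.2, 0.1],  # "pencil"
    [0.0, 0.9, 0.4],  # "bicycle"
]
print(knn([0.85, 0.15, 0.05], data, k=2))  # -> [0 1]: "pen" and "pencil" are the nearest neighbours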

Here’s a common pipeline for a vector database:


Figure 1: Pipeline of a vector database

  1. Indexing: After transforming the raw data into embeddings, the vector database organises vectors using algorithms such as ANN. This process maps the vectors to a data structure designed to facilitate functions such as search and retrieval.
  2. Querying: The vector database compares the query vector against the vectors in the dataset, using a similarity search with a user-defined metric. This allows us to find the nearest neighbours to the query vector, maximising similarity.
  3. Post-Processing: In some instances, the vector database retrieves the nearest neighbours from the dataset and applies post-processing to generate the final results. This step may involve relabelling the nearest neighbours with an alternative similarity measure.

The diagram below shows a more in-depth view of the indexing and querying phases:


Figure 2: Storing path diagram for vector databases

In particular, the indexing phase begins with selecting a machine learning model suitable for generating vector embeddings based on the type of data we are working with, such as text, images, audio, or tabular data. Once the appropriate model is chosen, data will be converted into embeddings, or vectors, by processing it through the embedding model. Along with these vector representations, relevant metadata will be saved as it can be used later to filter search results during similarity searches. The vector database will then index the vector embeddings and metadata separately, using various indexing methods like ANN. Finally, the vector data will be stored alongside these indexes and the associated metadata, enabling efficient retrieval and querying.


Figure 3: Query path diagrams for vector databases

The querying phase, which consists of running queries in a vector database, is usually made up of two parts: the first is the input data that needs to be matched, like a picture that is compared to others, and the second is a metadata filter to exclude results with certain known traits, like leaving out images of dresses in a specific colour. This filter can be applied either before or after the similarity search. The query data is processed using the same model that was used to store the data in the database, and the search then retrieves similar results based on how closely they match the original data.
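The sketch below mimics that query path on a toy in-memory "database": each record carries an embedding plus metadata, the metadata filter is applied first (the pre-filter variant), and the remaining candidates are ranked by cosine similarity. The records and field names are invented purely for illustration.

import numpy as np

# Toy records: an embedding plus the metadata saved alongside it at indexing time.
records = [
    {"id": 1, "vector": [0.9, 0.1], "meta": {"category": "dress", "colour": "red"}},
    {"id": 2, "vector": [0.8, 0.3], "meta": {"category": "dress", "colour": "blue"}},
    {"id": 3, "vector": [0.1, 0.9], "meta": {"category": "shoe", "colour": "red"}},
]

def search(query_vector, meta_filter, k=2):
    """Pre-filter on metadata, then rank the surviving records by cosine similarity."""
    candidates = [r for r in records
                  if all(r["meta"].get(key) == value for key, value in meta_filter.items())]
    q = np.asarray(query_vector, dtype=float)
    q = q / np.linalg.norm(q)

    def similarity(record):
        v = np.asarray(record["vector"], dtype=float)
        return float(v @ q) / float(np.linalg.norm(v))

    return sorted(candidates, key=similarity, reverse=True)[:k]

# Find records similar to the query while excluding everything that is not red.
print(search([0.85, 0.2], {"colour": "red"}))  # -> records 1 and 3, with the red dress ranked first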

LLM use case: Using OpenSearch vector capabilities

Charmed OpenSearch, an open source OpenSearch operator, provides vector database functionality through an enabled k-NN plugin, enhancing conversational applications with essential features like fault tolerance, access controls, and a powerful query engine. This makes Charmed OpenSearch an ideal tool for applications like Retrieval Augmented Generation (RAG), which ensures that conversational applications generate accurate results with contextual relevance and domain-specific knowledge, even in areas where the relevant facts were not originally part of the training dataset.
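As a rough sketch of what this looks like in practice, the snippet below uses the opensearch-py client to create a small index with the k-NN plugin enabled, store a document with its embedding, and run a nearest-neighbour query. The host, credentials, index name and the 384-dimension figure are all placeholder assumptions; adapt them to your own Charmed OpenSearch deployment and embedding model.

from opensearchpy import OpenSearch  # assumes the opensearch-py client is installed

# Placeholder connection details; point these at your own OpenSearch endpoint.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}],
                    http_auth=("admin", "admin"), use_ssl=True, verify_certs=False)

# An index with k-NN enabled and a 384-dimensional vector field.
client.indices.create(index="docs", body={
    "settings": {"index": {"knn": True}},
    "mappings": {"properties": {
        "text": {"type": "text"},
        "embedding": {"type": "knn_vector", "dimension": 384},
    }},
})

# Store a document together with the embedding produced by whichever model you chose.
client.index(index="docs", id=1, refresh=True,
             body={"text": "Charmed OpenSearch can act as a vector database.",
                   "embedding": [0.01] * 384})

# Retrieve the 3 documents whose embeddings are nearest to the query embedding.
results = client.search(index="docs", body={
    "size": 3,
    "query": {"knn": {"embedding": {"vector": [0.01] * 384, "k": 3}}},
})
print([hit["_source"]["text"] for hit in results["hits"]["hits"]])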

A practical example of using Charmed OpenSearch in the RAG process involves using it as the retrieval tool in an experiment that runs a Jupyter notebook on top of Charmed Kubeflow to run inference on an LLM.


Figure 5: RAG workflow with open source tooling

Follow the tutorial: LLMs Retrieval Augmented Generation (RAG) using Charmed OpenSearch

Summary

Vector databases have become increasingly important as AI applications in fields like natural language processing, computer vision, and automated speech recognition rely ever more heavily on vector embeddings. Unlike traditional scalar-based databases, vector databases are specifically designed to handle the unique challenges associated with managing these embeddings in production environments, offering distinct advantages over both conventional databases and standalone vector indexes.

To provide you with a deeper understanding of how vector databases work, we’ve explored the core elements of a vector database, including its operational mechanics, the algorithms it employs, and the additional features that make it robust enough for production use. 

Canonical for your vector database needs

on October 03, 2024 02:56 PM

E318 Impressões Digitais.

Podcast Ubuntu Portugal

Curiously, this week we didn't talk much about Ubuntu - but we kept ourselves very entertained praising the Internet Archive and debating the questionable motivations of those attacking it over DRM; we commented on the sensationalist news circulating on social networks about Mozilla and Firefox; how nice it is to use containers; lawsuits about privacy and user rights; panic on the networks over vulnerabilities in CUPS; the general hysteria because the coffee ran out; and we also left homework on GUIX and ideas for alternative web browsers.

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is: “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on October 03, 2024 12:00 AM

October 01, 2024

Ubuntu Pro is a premium version of the Ubuntu operating system offered by Canonical. It is designed to provide additional support, enhanced security, and advanced features for enterprises and organizations that require a more stable and secure environment. Here’s an overview of the key features that make Ubuntu Pro a standout choice:

Key Features of Ubuntu Pro

  1. Long-Term Support (LTS): Ubuntu Pro offers extended support for up to 10 years for each version, giving businesses ample time to plan and execute their migration strategies. This ensures a more secure and stable system, reducing the risk of vulnerabilities.
  2. Easy Installation and Configuration: Ubuntu Pro simplifies the installation process and includes advanced configuration tools, making it easier to integrate into existing IT environments. Whether you’re deploying it on servers or workstations, the setup is seamless.
  3. Enhanced Security: One of the key benefits of Ubuntu Pro is access to more security updates and patches. This is crucial for protecting your systems from the latest threats. It includes deeper risk assessment and monitoring capabilities to safeguard your infrastructure.
  4. Support for Both Public and Private Infrastructure: Ubuntu Pro is designed to work well across both public and private infrastructures, making it a great option for organizations using cloud servers and applications. It integrates easily with various cloud platforms, ensuring flexibility in deployment.
  5. Additional Features: Users of Ubuntu Pro gain access to extra features like Livepatch, which allows for kernel updates without requiring a system reboot. This minimizes downtime and enhances system reliability. Ubuntu Pro also includes advanced monitoring and management tools to help keep your systems running smoothly.
  6. Community and Enterprise Support: Canonical provides dedicated customer support for Ubuntu Pro users, as well as access to the Ubuntu community for advice and troubleshooting. This combination of professional and community support ensures you have the help you need, when you need it.
  7. Versatile Usage: Whether you’re running servers, desktops, or IoT devices, Ubuntu Pro is designed to be versatile. Its compatibility across multiple platforms makes it a solid choice for a wide range of use cases, from cloud infrastructure to edge computing.

Free Subscription for Ubuntu Pro

Ubuntu Pro offers a free personal subscription, which includes:

  • Support for up to 5 machines for individuals or any business they own.
  • Support for up to 50 machines for active Ubuntu Community members.

This free subscription allows users to take full advantage of Ubuntu Pro’s extended features and support, making it a great option for individuals and small businesses, as well as dedicated contributors to the Ubuntu community.

Why Choose Ubuntu Pro?

With its extended support, enhanced security, and advanced features, Ubuntu Pro is an excellent choice for organizations that need a reliable, secure, and scalable operating system. Whether you’re managing a large infrastructure or a smaller setup, Ubuntu Pro offers the tools and support you need to keep your systems running smoothly.

If you’re interested in learning more about Ubuntu Pro or have specific questions, don’t hesitate to explore further or reach out to the Ubuntu community.

Embrace the power of Ubuntu Pro and ensure your systems are ready for the future!

For more info, visit https://ubuntu.com/pro

The post Ubuntu Pro: A Premium Offering for Enhanced Security and Support appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on October 01, 2024 03:59 PM

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Pydantic

My main Debian project for the month turned out to be getting Pydantic back into a good state in Debian testing. I’ve used Pydantic quite a bit in various projects, most recently in Debusine, so I have an interest in making sure it works well in Debian. However, it had been stalled on 1.10.17 for quite a while due to the complexities of getting 2.x packaged. This was partly making sure everything else could cope with the transition, but in practice mostly sorting out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck.

Learning Rust is on my to-do list, but merely not knowing a language hasn’t stopped me before. So I learned how the Debian Rust team’s packaging works, upgraded a few packages to new upstream versions (including rust-half and upstream rust-idna test fixes), and packaged rust-jiter. After a lot of waiting around for various things and chasing some failures in other packages I was eventually able to get current versions of both pydantic-core and pydantic into testing.

I’m looking forward to being able to drop our clunky v1 compatibility code once debusine can rely on running on trixie!

OpenSSH

I upgraded the Debian packaging to OpenSSH 9.9p1.

YubiHSM

I upgraded python-yubihsm, yubihsm-connector, and yubihsm-shell to new upstream versions.

I noticed that I could enable some tests in python-yubihsm and yubihsm-shell; I’d previously thought the whole test suite required a real YubiHSM device, but when I looked closer it turned out that this was only true for some tests.

I fixed yubihsm-shell build failures on some 32-bit architectures (upstream PRs #431, #432), and also made it build reproducibly.

Thanks to Helmut Grohne, I fixed yubihsm-connector to apply udev rules to existing devices when the package is installed.

As usual, bookworm-backports is up to date with all these changes.

Python team

setuptools 72.0.0 removed the venerable setup.py test command. This caused some fallout in Debian, some of which was quite non-obvious as packaging helpers sometimes fell back to different ways of running test suites that didn’t quite work. I fixed django-guardian, manuel, python-autopage, python-flask-seeder, python-pgpdump, python-potr, python-precis-i18n, python-stopit, serpent, straight.plugin, supervisor, and zope.i18nmessageid.

As usual for new language versions, the addition of Python 3.13 caused some problems. I fixed psycopg2, python-time-machine, and python-traits.

I fixed build/autopkgtest failures in keymapper, python-django-test-migrations, python-rosettasciio, routes, transmissionrpc, and twisted.

buildbot was in a bit of a mess due to being incompatible with SQLAlchemy 2.0. Fortunately by the time I got to it upstream had committed a workable set of patches, and the main difficulty was figuring out what to cherry-pick since they haven’t made a new upstream release with all of that yet. I figured this out and got us up to 4.0.3.

Adrian Bunk asked whether python-zipp should be removed from trixie. I spent some time investigating this and concluded that the answer was no, but looking into it was an interesting exercise anyway.

On the other hand, I looked into flask-appbuilder, concluded that it should be removed, and filed a removal request.

I upgraded some embedded CSS files in nbconvert.

I upgraded importlib-resources, ipywidgets, jsonpickle, pydantic-settings, pylint (fixing a test failure), python-aiohttp-session, python-apptools, python-asyncssh, python-django-celery-beat, python-django-rules, python-limits, python-multidict, python-persistent, python-pkginfo, python-rt, python-spur, python-zipp, stravalib, transmissionrpc, vulture, zodbpickle, zope.exceptions (adopting it), zope.i18nmessageid, zope.proxy, and zope.security to new upstream versions.

debmirror

The experimental and *-proposed-updates suites used to not have Contents-* files, and a long time ago debmirror was changed to just skip those files in those suites. They were added to the Debian archive some time ago, but debmirror carried on skipping them anyway. Once I realized what was going on, I removed these unnecessary special cases (#819925, #1080168).

on October 01, 2024 01:19 PM

September 30, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 859 for the week of September 22 – 28, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • Ubuntu Meeting Activity Reports
  • Rocks Public Journal
  • LXD: Weekly news #364
  • LoCo Events
  • Oracular Oriole (24.10) Release Status Tracking
  • CUPS Remote Code Execution Vulnerability Fix Available
  • Other Community News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Cristovao Cordeiro (cjdc)
  • Din Mušić
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on September 30, 2024 10:39 PM

September 29, 2024

A networking guide for Incus

Simos Xenitellis

Incus is a hypervisor/manager for virtual machines and application/system containers. Get community support here.

A virtual machine (VM) is an instance of an operating system that runs on a computer, along with the main operating system. A virtual machine uses hardware virtualization features for the separation from the main operating system.

A system container is an instance of an operating system that also runs on a computer, along with the main operating system. A system container, instead, uses security primitives of the Linux kernel for the separation from the main operating system. The system container follows the lifecycle of a computer system. You can think of system containers as software virtual machines.

An application container is a container that has an application or service. It follows the lifecycle of the application instead of a system. That is, here you start and stop the application instead of booting and shutting down a system. Incus supports Open Container Initiative (OCI) images such as Docker images. When Incus launches an OCI image, it uses its own runtime, not Docker’s. That is, Incus consumes images from any OCI image repositories.

In virtual machines and system/application containers we can attach virtual networking devices, either

  • none (i.e. an instance without networking),
  • one (the most common and simplest case), or
  • more than one.

In addition to the virtual networking devices, we can also attach real hardware networking devices. Those devices can be taken away from the host and get pushed into a virtual machine or system container.

You may use a combination of those networking devices in the same instance. It is left as an exercise to the reader to explore that road. In these tutorials we look at, at most, one networking device per instance.

There will be attempts to generalize and explain in practical terms. If I get something wrong, please correct me in the comments so that it gets fixed and we all learn something new. Note that I will be editing this content along the way, adding material, troubleshooting cases, etc.

In this post we are listing tutorials of the different Incus devices of type nic (network interface controller). Whatever we write in this post and the linked tutorials is covered in that documentation URL!

The list of tutorials per networking:

  1. bridge (the default, the local network bridge), it’s in this post below.
  2. bridged, (pending)
  3. macvlan, (pending)
  4. none,
  5. physical,
  6. ipvlan,
  7. routed,

The setup

When demonstrating these network configurations, we will be using an Incus VM. While learning, try things out in your Incus VM first, before applying them on your host or your server.

We launch an Incus VM, called tutorial, with Ubuntu 24.04 LTS, then get a shell with the default non-root account ubuntu. I am impatient, so I repeatedly type the incus exec command to get a shell. The VM takes a few moments to boot up, and I get interesting error messages until the VM is actually running. Not really relevant to this tutorial, but you will get educated at every opportunity.

$ incus launch images:ubuntu/24.04/cloud tutorial --vm
Launching tutorial
$ incus exec tutorial -- su -l ubuntu
Error: VM agent isn't currently running
$ incus exec tutorial -- su -l ubuntu
su: user ubuntu does not exist or the user entry does not contain all the required fields
$ incus exec tutorial -- su -l ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@tutorial:~$ 

We got a shell in the VM. Then, install Incus which is available in the default repositories of Ubuntu 24.04 LTS. Also, we install zfsutils-linux, which are the client utilities to use ZFS in Incus. We are advised to add our non-root account to the incus-admin group in order to have access to Incus. Without that, we would have to use sudo all the time. When you add a user to a group, you need to logout then log in again for the change to take effect. And this is what we do (unless you know about newgrp).

ubuntu@tutorial:~$ sudo apt install -y incus zfsutils-linux
...
Creating group 'incus' with GID 989.
Creating group 'incus-admin' with GID 988.
Created symlink /etc/systemd/system/multi-user.target.wants/incus-startup.service → /usr/lib/systemd/system/incus-startup.service.
Created symlink /etc/systemd/system/sockets.target.wants/incus-user.socket → /usr/lib/systemd/system/incus-user.socket.
Created symlink /etc/systemd/system/sockets.target.wants/incus.socket → /usr/lib/systemd/system/incus.socket.
incus.service is a disabled or a static unit, not starting it.
incus-user.service is a disabled or a static unit, not starting it.

Incus has been installed. You must run `sudo incus admin init` to
perform the initial configuration of Incus.
Be sure to add user(s) to either the 'incus-admin' group for full
administrative access or the 'incus' group for restricted access,
then have them logout and back in to properly setup their access.

...
ubuntu@tutorial:~$ sudo usermod -a -G incus-admin ubuntu
ubuntu@tutorial:~$ logout
$ incus exec tutorial -- su -l ubuntu
ubuntu@tutorial:~$ 

Now we initialize Incus with sudo incus admin init.

Default Incus networking

When you install and setup Incus with incus admin init, you are prompted whether you want to create a local network bridge. We press Enter to all prompts, which means that we accept all the defaults that are presented to us. The last question is whether to show the initialization configuration. If you missed it, you can get it after the fact by running incus admin init --dump (dumps the configuration).

ubuntu@tutorial:~$ incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (zfs, dir) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 5GiB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null

ubuntu@tutorial:~$

If you accept the defaults (i.e. press Enter in each) or type them explicitly, you get a local bridge named incusbr0 that is managed by Incus, and gives private IPv4 and IPv6 IP addresses to your newly created instances.

Let’s see them in practice in your Incus installation. You have configured Incus and Incus created a default profile, called default, for you. This profile is applied by default to all newly created instances and has the networking configuration in there. In that profile there are two devices, and one of them is the networking device. In Incus the device is called eth0 (in pink color), and in the instance it will be shown as eth0 (green color). On the host, the bridge will appear with the name incusbr0. It’s a networking type, hence of type nic.

ubuntu@tutorial:~$ incus profile list
+---------+-----------------------+---------+
|  NAME   |      DESCRIPTION      | USED BY |
+---------+-----------------------+---------+
| default | Default Incus profile | 0       |
+---------+-----------------------+---------+
ubuntu@tutorial:~$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []
ubuntu@tutorial:~$ 

incusbr0 was created by Incus. Let’s see details through the incus network commands. We first list the network interfaces and then we show the incusbr0 network interface. incusbr0 is a managed network interface (in pink below), and it’s managed by Incus. Incus takes care of the networking and provides DHCP services, and access to the upstream network (i.e. the Internet). incusbr0 is a network bridge (in blue). An instance that requires network configuration from incusbr0, will get an IP address from the range 10.180.234.1-254 (in orange). Network Address Translation (NAT) is enabled (also in orange), which means there is access to the upstream network, and likely the Internet.

ubuntu@tutorial:~$ incus network list
+----------+----------+---------+-----------------+---------+---------+
|   NAME   |   TYPE   | MANAGED |      IPV4       | USED BY |  STATE  |    
+----------+----------+---------+-----------------+---------+---------+
| enp5s0   | physical | NO      |                 | 0       |         |    
+----------+----------+---------+-----------------+---------+---------+
| incusbr0 | bridge   | YES     | 10.180.234.1/24 | 1       | CREATED |
+----------+----------+---------+-----------------+---------+---------+
ubuntu@tutorial:~$ incus network show incusbr0
config:
  ipv4.address: 10.180.234.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:7:7dfe:75cf::1/64
  ipv6.nat: "true"
description: ""
name: incusbr0
type: bridge
used_by:
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
ubuntu@tutorial:~$ 

Let’s launch a container and test these out. The instance got an IP address (in orange) that is within the range of the network bridge above.

ubuntu@tutorial:~$ incus launch images:alpine/edge/cloud myalpine
Launching myalpine
ubuntu@tutorial:~$ incus list -c ns4t     
+----------+---------+----------------------+-----------+
|   NAME   |  STATE  |         IPV4         |   TYPE    |
+----------+---------+----------------------+-----------+
| myalpine | RUNNING | 10.180.234.24 (eth0) | CONTAINER |
+----------+---------+----------------------+-----------+
ubuntu@tutorial:~$ 

The IP address is OK but could it look better? It's private anyway, and we can select anything from the range 10.x.y.z. Let's change it so that it uses 10.10.10.1-254 instead. We set the configuration of incusbr0 for ipv4.address (see earlier) to a new value, 10.10.10.1/24. Each number separated by dots is 8 bits in length, and /24 means that the first 3 * 8 = 24 bits should stay the same. We make the change, but the instance still has the old IP address. We restart the instance, and it automatically gets a new IP address from the new range.
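As a small aside, Python's standard ipaddress module can make the /24 arithmetic concrete; this is just an illustration and not part of the Incus workflow.

import ipaddress

# incusbr0 was reconfigured to 10.10.10.1/24: the first 24 bits (10.10.10) identify the
# network, and the remaining 8 bits identify hosts on it.
bridge = ipaddress.ip_interface("10.10.10.1/24")
network = bridge.network

print(network)                   # 10.10.10.0/24
print(network.num_addresses)     # 256 addresses in total
hosts = list(network.hosts())    # usable host addresses (network and broadcast excluded)
print(hosts[0], "-", hosts[-1])  # 10.10.10.1 - 10.10.10.254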

ubuntu@tutorial:~$ incus network set incusbr0 ipv4.address=10.10.10.1/24
ubuntu@tutorial:~$ incus list -c ns4t
+----------+---------+----------------------+-----------+
|   NAME   |  STATE  |         IPV4         |   TYPE    |
+----------+---------+----------------------+-----------+
| myalpine | RUNNING | 10.180.234.24 (eth0) | CONTAINER |
+----------+---------+----------------------+-----------+
ubuntu@tutorial:~$ incus restart myalpine
ubuntu@tutorial:~$ incus list -c ns4t
+----------+---------+--------------------+-----------+
|   NAME   |  STATE  |        IPV4        |   TYPE    |
+----------+---------+--------------------+-----------+
| myalpine | RUNNING | 10.10.10.24 (eth0) | CONTAINER |
+----------+---------+--------------------+-----------+
ubuntu@tutorial:~$ 

We have created incusbr0. Are we allowed to create another private bridge? Sure we are. We will call it incusbr1, and we will also disable IPv6 networking. IPv6 addresses are too wide and mess up the formatting on my blog. As you may have noticed earlier, there were no IPv6 addresses although IPv6 was configured on incusbr0. I cheated and removed the IPv6 addresses in some command outputs.

ubuntu@tutorial:~$ incus network create incusbr1 ipv4.address=10.10.20.1/24 ipv6.address=none
Network incusbr1 created
ubuntu@tutorial:~$ incus network show incusbr1
config:
  ipv4.address: 10.10.20.1/24
  ipv6.address: none
description: ""
name: incusbr1
type: bridge
used_by: []
managed: true
status: Created
locations:
- none
ubuntu@tutorial:~$ 

We have created incusbr1. Can we now launch an instance onto that private bridge? We launch the instance called myalpine1 and use the incus launch parameter --network incusbr1 to specify a different network from the default network in the default Incus profile. We verify below that myalpine1 is served by incusbr1 (in green).

ubuntu@tutorial:~$ incus launch images:alpine/edge/cloud myalpine1 --network incusbr1
Launching myalpine1
ubuntu@tutorial:~$ incus list -c ns4t
+-----------+---------+--------------------+-----------+
|   NAME    |  STATE  |        IPV4        |   TYPE    |
+-----------+---------+--------------------+-----------+
| myalpine  | RUNNING | 10.10.10.24 (eth0) | CONTAINER |
+-----------+---------+--------------------+-----------+
| myalpine1 | RUNNING | 10.10.20.85 (eth0) | CONTAINER |
+-----------+---------+--------------------+-----------+
ubuntu@tutorial:~$ incus network show incusbr1
config:
  ipv4.address: 10.10.20.1/24
  ipv6.address: none
description: ""
name: incusbr1
type: bridge
used_by:
- /1.0/instances/myalpine1
managed: true
status: Created
locations:
- none
ubuntu@tutorial:~$ 

Technical details

The instances that use the Incus private bridge have access to the Internet. How is this achieved? It is achieved with either iptables or nftables rules. In recent versions of Linux distributions, you would be using nftables by default (command: nft, no relation to NFTs). To view the firewall ruleset that was created by Incus, run sudo nft list ruleset. Here is my ruleset; yours should be similar. There is one table for Incus and four chains: a postrouting (NAT) chain, a forward chain, an input chain and an output chain. More at the nftables documentation site.

ubuntu@tutorial:~$ sudo nft list ruleset
table inet incus {
	chain pstrt.incusbr0 {
		type nat hook postrouting priority srcnat; policy accept;
		ip saddr 10.57.39.0/24 ip daddr != 10.57.39.0/24 masquerade
		ip6 saddr fd42:e7b:739c:7117::/64 ip6 daddr != fd42:e7b:739c:7117::/64 masquerade
	}

	chain fwd.incusbr0 {
		type filter hook forward priority filter; policy accept;
		ip version 4 oifname "incusbr0" accept
		ip version 4 iifname "incusbr0" accept
		ip6 version 6 oifname "incusbr0" accept
		ip6 version 6 iifname "incusbr0" accept
	}

	chain in.incusbr0 {
		type filter hook input priority filter; policy accept;
		iifname "incusbr0" tcp dport 53 accept
		iifname "incusbr0" udp dport 53 accept
		iifname "incusbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		iifname "incusbr0" udp dport 67 accept
		iifname "incusbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		iifname "incusbr0" udp dport 547 accept
	}

	chain out.incusbr0 {
		type filter hook output priority filter; policy accept;
		oifname "incusbr0" tcp sport 53 accept
		oifname "incusbr0" udp sport 53 accept
		oifname "incusbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		oifname "incusbr0" udp sport 67 accept
		oifname "incusbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		oifname "incusbr0" udp sport 547 accept
	}
}
ubuntu@tutorial:~$ 

Future considerations

  1. Network isolation.

on September 29, 2024 02:12 PM

September 28, 2024

In the world of operating systems, both the GNU Hurd and the Linux kernel represent distinct philosophies and technical approaches. While both share a foundation rooted in the Free Software movement, their paths have diverged significantly over time. Let’s explore the key differences between them and how Linux, in particular, has grown to dominate a vast range of computing environments — including some exciting options for ham radio operators!


GNU Hurd: The Dream of a Microkernel

The GNU Hurd was the original vision of the Free Software Foundation (FSF), initiated by Richard Stallman as part of the GNU Project in 1990. The idea was to create a fully free operating system where the Hurd would serve as the kernel. It utilizes a microkernel architecture, meaning that core functions like memory management, file systems, and device drivers are managed in user-space processes called servers, rather than within the kernel itself. The microkernel, Mach, handles only the most essential functions like task scheduling and inter-process communication (IPC).

This approach promises a flexible, modular design, making it easier to maintain and modify. If one component fails, the system theoretically can recover more gracefully since the failure is isolated. However, this modularity has come at the cost of complexity and performance challenges, making Hurd notoriously difficult to develop. As a result, GNU Hurd remains largely an experimental project, with few practical deployments outside academic interest.

Key features of GNU Hurd:

  • Microkernel Design: Separation of core services into user-space servers.
  • Modularity: Theoretically more secure and fault-tolerant, but challenging to implement.
  • Freedom and Flexibility: In alignment with the GNU philosophy, designed for ultimate user control over the system.

Unfortunately, despite its potential, the slow development of Hurd has kept it from achieving widespread use, especially when compared to Linux.


Linux Kernel: From a Student Project to Global Dominance

At nearly the same time that Hurd began development, a Finnish student named Linus Torvalds started work on what would become the Linux kernel in 1991. Unlike Hurd, Linux took a monolithic kernel approach, meaning that most of the core system functionality (device drivers, memory management, file systems, networking) runs directly within the kernel space. This design has proven to be both efficient and performant, allowing Linux to quickly gain traction as a robust, stable, and high-performance kernel.

Though Linux was not initially tied to the GNU Project, it rapidly became the kernel of choice for the broader GNU/Linux system, pairing GNU software with the Linux kernel. Today, Linux is the foundation of countless operating systems used across various domains, from personal computers to embedded systems, mobile devices, supercomputers, and even space missions.

Key characteristics of Linux:

  • Monolithic Design: Core services run within the kernel, leading to better performance.
  • Modularity: Despite being monolithic, Linux supports dynamically loadable modules, giving flexibility to add or remove kernel functionality without rebooting.
  • Massive Hardware Support: Thanks to broad community and corporate backing, Linux supports a huge variety of hardware platforms.
  • Fast Development: Linux has a highly active community, including contributions from individuals, organizations, and major corporations like Google, IBM, and Red Hat.

The Linux kernel’s rapid development, stability, and wide hardware support have helped it become the dominant force in open-source operating systems. It powers everything from web servers and cloud infrastructure to IoT devices and smartphones (via Android).


Linux for Ham Radio Operators

For radio amateurs (ham radio enthusiasts), the flexibility of Linux has opened the door to powerful tools for digital communication and signal processing. Several Linux distributions are specifically tailored to the needs of the ham radio community, offering ready-to-use setups with pre-installed software for operating digital modes, logging contacts, controlling radios, and even experimenting with SDR (Software Defined Radio).

Here are some Linux distributions popular among ham radio operators:

  • Ham Radio Pure Blend (Debian): A specialized flavor of Debian Linux that includes a collection of ham radio applications for digital modes (like FT8 and PSK31), logging, and radio transceiver control. It’s a great starting point for those already familiar with Debian’s ecosystem.
  • Skywave Linux: Built for SDR enthusiasts, Skywave Linux comes pre-configured with software to receive and decode signals from around the world. It includes tools like Gqrx and CubicSDR, making it ideal for listening to shortwave broadcasts, weather satellite transmissions, and more.
  • Pi-Star: Designed for Raspberry Pi, Pi-Star is popular in the ham radio community for digital voice communications, supporting modes like DMR, D-Star, and C4FM. It’s a lightweight and easy-to-use system for setting up digital repeaters or hotspots.

Each of these distributions provides ham operators with powerful tools to enhance their radio experiences, whether it’s for logging contacts, experimenting with new digital modes, or setting up communication infrastructure.


Conclusion: Two Roads, One Community

While GNU Hurd remains an ambitious but incomplete project, Linux has become a cornerstone of the global open-source ecosystem. Its monolithic design, performance, and flexibility have enabled it to thrive in a vast range of environments, from everyday desktop use to specialized fields like ham radio. For operators and hobbyists in the ham radio world, Linux’s adaptability has led to the creation of several dedicated distributions, making it an essential tool for modern amateur radio enthusiasts.

Have you tried using any of these Linux distributions for ham radio? Or maybe you’ve experimented with GNU Hurd? Share your experiences with us in the comments!


#GNU #Linux #HamRadio #OpenSource #TechHistory #SDR #AmateurRadio #DigitalModes #PiStar #Debian

The post GNU Hurd vs. Linux Kernel: Two Paths in Free Software – Plus Linux Distributions for Ham Radio Enthusiasts appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on September 28, 2024 06:54 AM

September 26, 2024

E317 Paletes De Pestanas À Bruta.

Podcast Ubuntu Portugal

We are back full of news and updates from the Community and beyond: there will be notifications and new permissions for Snap packages; the 24.10 beta is out for testing; there is a new security centre; ANSOL is calling for volunteers for the Festa do Software Livre; the Semana do Software Livre is taking place in Brazil; and of course we had time for nonsense: Diogo is close to entering the Guinness Book of Records with his number of Firefox tabs, and Miguel butted heads with Python scripts, living up to the saying "when you think like a hammer, every problem looks like a nail"!

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is: “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on September 26, 2024 12:00 AM

September 24, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 858 for the week of September 15 – 21, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu 24.10 (Oracular Oriole) Beta released
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • Ubuntu Meeting Activity Reports
  • Ubuntu Flavor sync meeting notes: 9 September 2024
  • UbuCon Asia 2024 Team meeting 2024-09-15 12:00 UTC
  • Ubuntu Home Server Workshop 2024 @Busan
  • Ubucon Portugal 2024 needs you!
  • LoCo Events
  • Mir release 2.18.0
  • Call for testing: ubuntu-frame, mir-test-tools on the `22` track (Mir 2.17.2 update)
  • Ubuntu Desktop’s 24.10 Dev Cycle – Part 6: September Update
  • Other Community News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • dinmusic
  • Yeonguk Choo
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on September 24, 2024 01:22 AM

September 21, 2024


The beta of Kubuntu Oracular Oriole (to become 24.10 in October) has now been released, and is available for download.

This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of Kubuntu Oracular Oriole are not recommended for:

  • Anyone needing a stable system
  • Regular users who are not aware of pre-release issues
  • Anyone in a production environment with data or workflows that need to be reliable

They are, however, recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers
  • Other Ubuntu flavour developers

The Beta includes some software updates that are ready for broader testing. However, it is an early set of images, so you should expect some bugs.

We STRONGLY advise testers to read the Kubuntu 24.10 Beta release notes before installing, and in particular the section on ‘Known issues‘.

You can also find more information about the entire 24.10 release (base, kernel, graphics etc) in the main Ubuntu Beta release notes and announcement.



To enable Flatpaks in KDE’s Discover in Kubuntu 24.10, run this command:

sudo apt install flatpak plasma-discover-backend-flatpak


To enable the largest Flatpak repository, run this command:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo


Log out and log back in (or restart) to re-initialize the XDG_DATA_DIRS variable, otherwise, newly installed Flatpak apps will not run or appear in the startup menu.

on September 21, 2024 10:38 PM

The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 24.10, codenamed “Oracular Oriole”.

While this beta is reasonably free of any showstopper installer bugs, you will find some bugs within. This image is, however, mostly representative of what you will find when Ubuntu Studio 24.10 is released on October 10, 2024.

Special Notes

The Ubuntu Studio 24.10 image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/24.10/beta/

Full updated information, including Upgrade Instructions, are available in the Release Notes.

New Features This Release

  • Plasma 6.1 is now the default desktop environment, an upgrade from Plasma 5.27. This may have some unknown bugs that we’re ironing out as we go along, along with theming.
  • Ubuntu’s Generic Kernel is now capable of the same low latency processing as Ubuntu’s lowlatency kernel when certain boot parameters are used. Additionally, the lowlatency kernel is eventually going to be deprecated. With this in mind, we have switched to the generic kernel with the low latency boot parameters enabled by default. These boot parameters can be tweaked in Ubuntu Studio Audio Configuration.
  • Minimal Install Option for new installations. This allows users to install Ubuntu Studio and customize what they need later with Ubuntu Studio Installer.
  • Orchis is now our default theme, which replaces Materia, our default theme since 19.04. Materia has stopped development, so we decided to replace it.
  • PipeWire continues to improve with every release and now includes FFADO support (version 1.2.3).
  • Ubuntu Studio Installer’s included Ubuntu Studio Audio Configuration utility for fine-tuning the PipeWire setup now includes the ability to create or remove a dummy audio input device (version 1.30).
  • The legacy PulseAudio/JACK has been deprecated and discontinued, is no longer supported, and is no longer an option to use. Going forward, PipeWire or JACK are the only options. PipeWire’s JACK integration can be disabled from Ubuntu Studio Audio Configuration to use JACK by itself with QJackCtl, or via other means.

Major Package Upgrades

  • Ardour version 8.6.0
  • Qtractor version 1.1
  • OBS Studio version 30.2.3
  • Audacity version 3.6.1
  • digiKam version 8.4.0
  • Kdenlive version 24.08.1
  • Krita version 5.2.3

There are many other improvements, too numerous to list here. We encourage you to look around the freely-downloadable ISO image.

Known Issues

  • Due to the transition to Plasma 6 and Qt6, there may be some theming inconsistencies, especially for those upgrading. To work around these issues, reapply the default theme using System Settings and select “Orchis-dark” from Kvantum Manager.
  • Some graphics cards might find the transparency in the Orchis theme difficult to work with. For that reason, you can switch to “Orchis-dark-solid” in the Kvantum Manager. Feedback is welcome, and if the transparency becomes too burdensome, we can switch to the solid theme by default.
  • The new minimal install mode will not load the desktop properly with the extra icons (gimp, krita, patchance, etc.) in the top bar, so those had to be removed by default. If you find them useful, you can add them by right-clicking in the menu and clicking “Pin to Task Manager”. We apologize for the inconvenience.

Official Ubuntu Studio release notes can be found at https://ubuntustudio.org/ubuntu-studio-24-10-release-notes/

Further known issues, mostly pertaining to the desktop environment, can be found at https://wiki.ubuntu.com/OracularOriole/ReleaseNotes/Kubuntu

Additionally, the main Ubuntu release notes contain more generic issues: https://discourse.ubuntu.com/t/oracular-oriole-release-notes/44878

How You Can Help

Please test using the test cases on https://iso.qa.ubuntu.com. All you need is a Launchpad account to get started.

Additionally, we need financial contributions. Our project lead, Erich Eickmeyer, is working long hours on this project and trying to generate a part-time income. Go here to see how you can contribute financially (options are also in the sidebar).

Frequently Asked Questions

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.

Thunderbird is also a snap this cycle in order for the maintainers to get security patches delivered faster.

Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories in that they cannot be packaged in a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

Also, to keep theming consistent, all included themes are snapped in addition to the included .deb versions so that snaps stay consistent with our themes.

We are working with Canonical to make sure that the quality of snaps goes up with each release, so we ask that you please give snaps a chance instead of writing them off completely.

Q: If I install this Beta release, will I have to reinstall when the final release comes out?
A: No. If you keep it updated, your installation will automatically become the final release. However, if Audacity returns to the Ubuntu repositories before the final release, then you might end up with a double installation of Audacity. Removal instructions for one or the other will be made available in a future post.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.

Q: What if I don’t want all these packages installed on my machine?
A: We now include a minimal install option. Install using the minimal install option, then use Ubuntu Studio Installer to install what you need for your very own content creation studio.

on September 21, 2024 12:02 AM

September 17, 2024

My Chair

Benjamin Mako Hill

I realize that because I have several chairs, the phrase “my chair” is ambiguous. To reduce confusion, I will refer to the head of my academic department as “my office chair” going forward.

on September 17, 2024 10:11 PM

Recently I have been most baffled by how, on my iPhone, my iCloud backup was over 5GB (and therefore would not fit into the free space that I have) despite having turned off most of the apps that might want to be included in the backup.

There are many, many, many posts on the internet from people having this problem, and there are a few common things which come up. The first one is that iMessage is included in the backup, and this includes any images or videos you've received or sent. The second is that your photos are included. So if you're thinking "hey there's hardly anything on my phone, why is the iCloud backup so big" but you've got 2 years worth of chats with a zillion people full of videos... that's why.

I, however, had tried all that and I still couldn't get the backup size down. I even spoke to Apple support directly about it, three times; each of the people I spoke to had helpful suggestions, but it was equally clear that each of them was fishing around in the dark, because all the "usual" tricks and traps they knew about which caused this were things that I'd already dealt with or disabled. It all ended up inconclusive, and I still didn't have a backup.

Today, in desperation, I decided to try backing up the phone to my Linux machine so I could poke about in the backup to see if I could tell what was using all the space. Now, iPhones can be backed up to Macs (not surprisingly) and Windows, but there's no official provision for doing so on Linux, sadly. However, there is libimobiledevice, a set of command line tools. I used them to back up my phone to my desktop as follows:

  1. plug the phone in
  2. idevicebackup2 cloud off # disable iCloud backup
  3. idevicebackup2 -i encryption on # enter a password
  4. idevicebackup2 backup ./ # back up phone into current dir

Once I'd done that, I had a folder named for the UDID of my phone, filled with encrypted data. Fortunately, there is a Python library called iOSbackup which knows how to read and decrypt these backups, so I used it to write myself a little equivalent of the du utility to see which folders in the backup might be unexpectedly using a lot more storage than I expected.

And in fact there were a whole bunch of folders called something like Library/WebClips/(long string).webclip/ which were using tons of storage, some over a gigabyte. I immediately thought: hey, I bet they're Home Screen web apps. I use a lot of these -- if there's a PWA for a service, I'll use it rather than a platform-specific app. We set up Open Web Advocacy for a reason, after all. So this made me jump, from a standing start, to what turned out to be the correct conclusion. Each of these Library/WebClips/blah.webclip folders contains an ApplicationManifest file, and you can get iOSbackup to disgorge its decrypted content; it's a "binary plist" (which Python knows how to read), and with that I could see which Home Screen web apps were taking up space with this little script:

from iOSbackup import iOSbackup
import plistlib
import os
import json

UDID = "ENTER YOUR UDID HERE (the backup folder name)"
PASSWORD = "BACKUP ENCRYPTION PASSWORD"
FOLDER = "2024-09-17" # folder you put the backup in
b = iOSbackup(udid=UDID, cleartextpassword=PASSWORD, 
    backuproot=FOLDER)

# https://stackoverflow.com/a/53567149/1418014 thanks!
def formatSize(sizeInBytes, decimalNum=1,
    isUnitWithI=False, sizeUnitSeperator=""):
  """format size to human readable string"""
  # K=kilo, M=mega, G=giga, T=tera, P=peta, 
  # E=exa, Z=zetta, Y=yotta
  sizeUnitList = ['','K','M','G','T','P','E','Z']
  largestUnit = 'Y'

  if isUnitWithI:
    sizeUnitListWithI = []
    for curIdx, eachUnit in enumerate(sizeUnitList):
      unitWithI = eachUnit
      if curIdx >= 1:
        unitWithI += 'i'
      sizeUnitListWithI.append(unitWithI)

    sizeUnitList = sizeUnitListWithI

    largestUnit += 'i'

  suffix = "B"
  decimalFormat = "." + str(decimalNum) + "f" # ".1f"
  finalFormat = ("%" + decimalFormat + 
    sizeUnitSeperator + "%s%s") # "%.1f%s%s"
  sizeNum = sizeInBytes
  for sizeUnit in sizeUnitList:
      if abs(sizeNum) < 1024.0:
        return finalFormat % (sizeNum, sizeUnit, suffix)
      sizeNum /= 1024.0
  return finalFormat % (sizeNum, largestUnit, suffix)

webapp_sizes = {}
for file in b.getBackupFilesList():
    if not file["name"].startswith("Library/WebClips/"):
        continue
    webclip_folder = file["name"].split("/")[2]
    if webclip_folder not in webapp_sizes:
        webapp_sizes[webclip_folder] = {
            "size": 0, "name": None}

    # work out where this file is in the backup
    backup_file_loc = os.path.join(
        FOLDER, UDID, file["backupFile"][:2], 
        file["backupFile"])
    try:
        # technically this is accumulating the encrypted
        # size of the file, not the decrypted. But it's fine
        bf_size = os.stat(backup_file_loc).st_size
        webapp_sizes[webclip_folder]["size"] += bf_size
    except FileNotFoundError:
        continue

    if file["name"].endswith("/ApplicationManifest"):
        # decrypt it to a temp location
        # you should be doing this with tempfile
        dec = b.getFileDecryptedCopy(relativePath=file["name"],
            targetFolder="/tmp", targetName="iosdec")
        with open("/tmp/iosdec", mode="rb") as fp:
            data = fp.read()
            am = plistlib.loads(data)
            # go looking for the first one which looks like JSON
            for item in am["$objects"]:
                if type(item) is not str:
                    continue
                elif item.startswith("https://"):
                    webapp_name = item
                else:
                    try:
                        manifest_json = json.loads(item)
                    except:
                        continue
                    webapp_name = manifest_json.get("name", 
                        manifest_json.get("short_name", "?"))
                webapp_sizes[webclip_folder]["name"] = webapp_name
                break

for v in webapp_sizes.values():
    if v["size"] < 50 or not v["name"]: continue
    print(f"{v['name']}: {formatSize(v['size'])}")

and this helpfully printed a list which looked like this (but longer; I've kept a few around to give you the flavour):

https://elk.zone/: 133.1MB
https://squoosh.app/: 1.1MB
https://www.kryogenix.org/farmbound/: 11.5MB
https://nerdlegame.com/: 220.9MB
https://twitter.com/: 1.2GB
Phanpy: 1.6GB
https://www.nytimes.com/games/: 376.2MB

So... aha. Twitter and Phanpy can, I suppose, be excused since they are presumably caching every post ever, but I can delete those and re-add (and not bother re-adding Twitter) to get some of that back. Wordle, you are the weakest link, goodbye, and also I don't need elk.zone any more now I'm using Phanpy.

I removed a bunch of these. Then I told the iCloud Backup to run again. And now my backup size is 800MB, not 5GB. Hooray!

To be clear, this is not at all a Safari problem. Safari is absolutely doing the right thing here; well done Safari team. Web apps are apps; they should be included in my phone backup, 100%. The bug here is in the iCloud Backup Settings App List, which lists all the apps that are taking up space in the backup but does not list Home Screen web apps. This sucks, and it's a bug, and it should be fixed. Show me PWAs in the backup list, especially ones taking up a gigabyte of space in it. I have filed the bug at feedbackassistant.apple.com although I've never heard back from any of the others I've filed there so I don't really know what the process is.

OK, now off to add Phanpy again.

on September 17, 2024 09:35 PM

Introduction

The Linux Containers project maintains Long Term Support (LTS) releases for its core projects.
Those come with 5 years of support from upstream, with the first two years including bugfixes, minor improvements and security fixes, and the remaining 3 years getting only security fixes.

This is now the second round of bugfix releases for LXC, LXCFS and Incus 6.0 LTS.

LXC

LXC is the oldest Linux Containers project and the basis for almost every other one of our projects.
This low-level container runtime and library was first released in August 2008, led to the creation of projects like Docker and today is still actively used directly or indirectly on millions of systems.

Announcement: https://discuss.linuxcontainers.org/t/lxc-6-0-2-lts-has-been-released/21632

Highlights of this point release:

  • Reduced log level on some common messages
  • Fix compilation error on aarch64

LXCFS

LXCFS is a FUSE filesystem used to work around some shortcomings of the Linux kernel when it comes to reporting available system resources to processes running in containers.
The project started in late 2014 and is still actively used by Incus today as well as by some Docker and Kubernetes users.

Announcement: https://discuss.linuxcontainers.org/t/lxcfs-6-0-2-lts-has-been-released/21631

Highlights of this point release:

  • Fix building of LXCFS on musl systems (missing include)

Incus

Incus is our most actively developed project. This virtualization platform is just over a year old but has already seen over 3500 commits by over 120 individual contributors. Its first LTS release made it usable in production environments and significantly boosted its user base.

Announcement: https://discuss.linuxcontainers.org/t/incus-6-0-2-lts-has-been-released/21633

Highlights of this point release:

  • Completion of transition to native OVSDB for OVS/OVN
  • Baseline CPU definition for clustered users
  • Filesystem support for io.bus and io.cache
  • CPU flags in server resources
  • Unified image support in incus-simplestreams
  • Completion of libovsdb transition
  • Using a sub-path of a volume as a disk
  • Per-storage-pool project limits
  • Isolated OVN networks (no uplink)
  • Per-instance LXCFS
  • Support for environment file at create/launch time
  • Instance auto-restart
  • Column selection in all list commands
  • QMP command hooks and scriptlet
  • Live disk resize support in virtual machines
  • PCI devices hotplug
  • OVN load-balancer health checks
  • Promiscuous mode for OVN NICs
  • Ability to turn off IP allocation on OVN NICs
  • Customizable OIDC scope request
  • Configurable LVM PV metadata size
  • Configurable OVS socket path

What’s next?

We’re expecting another LTS bugfix release for the 6.0 branches later this year.

On top of that, Q4 of 2024 will also feature non-LTS feature releases of both LXC and LXCFS as we’re trying to push out new releases of those two projects every 6 months now.

Incus will keep going with its usual monthly feature release cadence.

on September 17, 2024 12:05 PM

September 13, 2024

Parasocial chat

On Linux Matters we have a friendly and active, public Telegram channel linked on our Contact page, along with a Discord Channel. We also have links to Mastodon, Twitter (not that we use it that much) and email.

At the time of writing there are roughly this ⬇️ number of people (plus bots, sockpuppets and duplicates) in or following each Linux Matters “official” presence:

Channel Number
Telegram 796
Discord 683
Mastodon 858
Twitter 9919

Preponderance of chat

We chose to have a presence in lots of places, but primarily the talent presenters (Martin, Mark, and myself (and Joe)) only really hang out to chat on Telegram and Mastodon.

I originally created the Telegram channel on November 20th, 2015, when we were publishing the Ubuntu Podcast (RIP in Peace) A.K.A. Ubuntu UK Podcast. We co-opted and renamed the channel when Linux Matters launched in 2023.

Prior to the channel’s existence, we used the Ubuntu UK Local Community (LoCo) Team IRC channel on Freenode (also, RIP in Peace).

We also re-branded our existing Mastodon accounts from the old Ubuntu Podcast to Linux Matters.

We mostly continue using Telegram and Mastodon as our primary methods of communication because on the whole they’re fast, reliable, stay synced across devices, have the features we enjoy, and at least one of them isn’t run by a weird billionaire.

Other options

We link to a lot of other places at the top of the Linux Matters home page, where our listeners can chat, mostly to each other and not us.

Being over 16, I’m not a big fan of Discord, and I know Mark doesn’t even have an account there. None of us use Twitter much anymore, either.

Periodically I ponder if we (Linux Matters) should use something other than Telegram. I know some listeners really don’t like the platform, but prefer other places like Signal, Matrix or even IRC. I know for sure some non-listeners don’t like Telegram, but I care less about their opinions.

Part of the problem is that I don’t think any of us really enjoy the other realtime chat alternatives. Both Matrix and Signal have terrible user experience, and other flaws. Which is why you don’t tend to find us hanging out in either of those places.

There are further options I haven’t even considered, like Wire, WhatsApp, and likely more I don’t even know or care about.

So we kept using Telegram over any of the above alternative options.

Pondering Posting Polls

I have repeatedly considered asking the listeners about their preferred chat platforms via our existing channels. But that seems flawed, because we use what we like, and no matter how many people prefer something else, we’re unlikely to move. Unless something strange happens 👀 .

Plus, oftentimes, especially on decentralised platforms, the audience can be somewhat “over-enthusiastic” about their preferred way being The Way™️ over the alternatives. It won’t do us any favours to get data saying 40% report we should use Signal, 40% suggest Matrix and 20% choose XMPP, if the four of us won’t use any of them.

Pursue Podcast Palaver Proposals

So rather than ask our audience, I thought I’d see what other podcasters promote for feedback and chatter on their websites.

I picked a random set from shows I have heard of, and may have listened to, plus a few extra ones I haven’t. None of this is endorsement or approval, I wanted the facts, just the fax, ma’am.

I collated the data in a json file for some reason, then generated the tables below. I don’t know what to do with this information, but it’s a bit of data we may use if we ever decide to move away from Telegram.
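A minimal sketch of that generation step (not the actual script, which isn't shown here, and assuming a hypothetical podcasts.json that simply maps each show name to the list of platform codes used in the tables below):

import json

# Platform codes as used in the key and tables below (EM = email, MA = Mastodon, ...)
COLUMNS = ["EM", "MA", "TW", "DS", "TG", "IR", "DW",
           "SK", "MX", "LI", "WF", "SG", "WA", "FB"]

# Hypothetical structure: {"Linux Matters": ["EM", "MA", "TG", "DS"], ...}
with open("podcasts.json") as fp:
    shows = json.load(fp)

print("Show", *COLUMNS)
for show, platforms in shows.items():
    row = ["X" if code in platforms else "-" for code in COLUMNS]
    print(show, *row)

The real JSON file may well be structured differently; this just illustrates turning such a mapping into a plain-text grid.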

Presenting Pint-Sized Payoff

The table shows some nerdy podcasts along with their primary means (as far as I can tell) of community engagement. Data was gathered manually from podcast home pages and “about” pages. I generally didn’t go into the page content for each episode. I made an exception for “Dot Social” and “Linux OTC” because there’s nothing but episodes on their home page.

It doesn’t matter for this research; I just thought it was interesting that some podcasters don’t feel the need to break out their contact details to a separate page, or make them more obvious. Perhaps they feel that listeners are likely to be viewing an episode page, or looking at specific show metadata, so it’s better to put the contact details there.

I haven’t included YouTube, where many shows publish and discuss, in addition to a podcast feed.

I am also aware that some people exclusively, or perhaps primarily publish on YouTube (or other video platforms). Those aren’t podcasts IMNSHO.

Key to the tables below. Column names have been shortened because it’s a w i d e table. The numbers indicate how many podcasts use that communication platform.

  • EM - Email address (13/18)
  • MA - Mastodon account (9/18)
  • TW - Twitter account (8/18)
  • DS - Discord server (8/18)
  • TG - Telegram channel (4/18)
  • IR - IRC channel (5/18)
  • DW - Discourse website (2/18)
  • SK - Slack channel (3/18)
  • LI - LinkedIn (2/18)
  • WF - Web form (2/18)
  • SG - Signal group (3/18)
  • WA - WhatsApp (1/18)
  • FB - FaceBook (1/18)

Linux

Show EM MA TW DS TG IR DW SK MX LI WF SG WA FB
Linux Matters
Ask The Hosts
Destination Linux
Linux Dev Time
Linux After Dark
Linux Unplugged
This Week in Linux
Ubuntu Security Podcast
Linux OTC

Open Source Adjunct

Show EM MA TW DS TG IR DW SK MX LI WF SG WA FB
2.5 Admins
Bad Voltage
Coffee and Open Source
Dot Social
Open Source Security
localfirst.fm

Other Tech

Show EM MA TW DS TG IR DW SK MX LI WF SG WA FB
ATP
BBC Newscast
The Rest is Entertainment

Point

Not entirely sure what to do with this data. But there it is.

Is Linux Matters going to move away from Telegram to something else? No idea.

on September 13, 2024 04:00 PM

September 12, 2024

git revert name and Akademy

Jonathan Riddell

I reverted my name back to Jonathan Riddell and have now made a new uid for my PGP key, you can get the updated one on keyserver.ubuntu.com or my contact page or my Launchpad page.

Here’s some pics from Akademy

on September 12, 2024 02:33 PM

September 11, 2024

Incus is a manager for virtual machines, system containers and application containers. Get Incus support here.

When you initially set up Incus, you create a storage pool where Incus will store everything. There are several options for storage pools; in this post we focus on ZFS storage pools, specifically those that are stored on a separate block device (like /dev/sdb).

We are dealing with two cases. One, your installation of Incus has somehow been removed but the storage pool is still there intact, and you want to recover by installing Incus again. Two, you want to move the disk with the storage pool from one computer to another, such as reconnecting the storage pool on a new server.

This type of task is quite risky if you have a lot of important data on your system. Obviously, before doing this on an actual system, you should take backups of your most important instances with incus export. You should then perform this tutorial several times so that you get the gist of recovering Incus installations. This tutorial shows you how to do a dry run of creating an Incus installation, killing it off, and then miraculously recovering it.

Prerequisites

You should have a running Incus installation.

Setting up Incus, using a block storage volume

We launch an Incus virtual machine (VM) that will act as our Incus server. We then (on the host) create a storage volume of type block. Next, we attach that block storage volume to the VM. In the VM it can be found as /dev/sdb. Subsequently, we run incus admin init to initialize Incus, and configure Incus to use the block device /dev/sdb when creating the storage pool. When we run incus admin init, we press Enter whenever we want to accept the default value.

$ incus launch images:ubuntu/24.04/cloud --vm incusserver
Launching incusserver
$ incus storage volume create default IncusStorage --type=block size=6GiB
Storage volume IncusStorage created
$ incus storage volume attach default IncusStorage incusserver
$ incus shell incusserver
root@incusserver:~# fdisk -l /dev/sdb
Disk /dev/sdb: 6 GiB, 6442450944 bytes, 12582912 sectors
Disk model: QEMU HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@incusserver:~# sudo apt install -y incus zfsutils-linux
...
root@incusserver:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: yes
Path to the existing block device: /dev/sdb
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    source: /dev/sdb
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null

root@incusserver:~#

Next we populate the Incus installation with a few alpines. We do this because we want to see these containers again after we recover the storage pool.

root@incusserver:~# incus launch images:alpine/edge alpine1
Launching alpine1
root@incusserver:~# incus launch images:alpine/edge alpine2
Launching alpine2
root@incusserver:~# incus launch images:alpine/edge alpine3
Launching alpine3
root@incusserver:~#

This is where the interesting stuff starts. We now want to shut down the Incus server and remove it. However, the block storage volume will still be there and in good condition, as the server has been shut down cleanly. Note that block storage volumes should only be attached to one system at a time.

root@incusserver:~# shutdown -h now
root@incusserver:~# Error: websocket: close 1006 (abnormal closure): unexpected EOF
$ incus storage volume show default IncusStorage
config:
  size: 6GiB
description: ""
name: IncusStorage
type: custom
used_by:
- /1.0/instances/incusserver
location: none
content_type: block
project: default
created_at: ...
$ incus delete incusserver
$ incus storage volume show default IncusStorage
config:
  size: 6GiB
description: ""
name: IncusStorage
type: custom
used_by: []
location: none
content_type: block
project: default
created_at: ...
$

Next, we launch a new VM that will be used as the new Incus server, then reattach the block storage volume with incus storage volume attach and install Incus along with the necessary ZFS utilities.

$ incus launch images:ubuntu/24.04/cloud --vm incusserver
Launching incusserver
$ incus storage volume attach default IncusStorage incusserver
$ incus shell incusserver
Error: Instance is not running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
root@incusserver:~# apt install -y zfsutils-linux incus
...
root@incusserver:~#

Finally, we bring back the old installation data with those three alpines. We run zpool import, which is a ZFS command that will look for potential ZFS pools and list them by name. The command zpool import default is the one that does the actual import. The ZFS pool name default was the name that was given by Incus before, when we were initializing Incus. Subsequently, we run incus admin recover to recover the ZFS pool and reconnect it with this new installation of Incus.

root@incusserver:~# zfs list
no datasets available
root@incusserver:~# zpool list
no pools available
root@incusserver:~# zpool import
   pool: default
     id: 8311839500301555365
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	default     ONLINE
	  sdb       ONLINE
root@incusserver:~# zpool import default
root@incusserver:~# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default  5.50G  6.80M  5.49G        -         -     0%     0%  1.00x    ONLINE  -
root@incusserver:~# 
root@incusserver:~# incus admin recover
This server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (zfs, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /dev/sdb
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "default" (backend="zfs", source="/dev/sdb")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
The following unknown storage pools have been found:
 - Storage pool "default" of type "zfs"
The following unknown volumes have been found:
 - Container "alpine2" on pool "default" in project "default" (includes 0 snapshots)
 - Container "alpine3" on pool "default" in project "default" (includes 0 snapshots)
 - Container "alpine1" on pool "default" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery...
root@incusserver:~# incus list
+---------+---------+------+------+-----------+-----------+
|  NAME   |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+---------+---------+------+------+-----------+-----------+
| alpine1 | STOPPED |      |      | CONTAINER | 0         |
+---------+---------+------+------+-----------+-----------+
| alpine2 | STOPPED |      |      | CONTAINER | 0         |
+---------+---------+------+------+-----------+-----------+
| alpine3 | STOPPED |      |      | CONTAINER | 0         |
+---------+---------+------+------+-----------+-----------+
root@incusserver:~#

Those alpines are in a STOPPED state. Will they start? Sure they will.

root@incusserver:~# incus start alpine1 alpine2 alpine3
root@incusserver:~# incus list -c ns4t
+---------+---------+----------------------+-----------+
|  NAME   |  STATE  |         IPV4         |   TYPE    |
+---------+---------+----------------------+-----------+
| alpine1 | RUNNING | 10.36.146.69 (eth0)  | CONTAINER |
+---------+---------+----------------------+-----------+
| alpine2 | RUNNING | 10.36.146.101 (eth0) | CONTAINER |
+---------+---------+----------------------+-----------+
| alpine3 | RUNNING | 10.36.146.248 (eth0) | CONTAINER |
+---------+---------+----------------------+-----------+
root@incusserver:~#

In this tutorial we saw how to recover an Incus installation, while the storage pool is intact. We covered the case that the storage pool is ZFS on a block device.

on September 11, 2024 02:05 PM

September 10, 2024

OpenUK Awards 2024

Jonathan Riddell

https://openuk.uk/openuk-september-2024-newsletter-1/

https://www.linkedin.com/feed/update/urn:li:activity:7238138962253344769/

Our 5th annual Awards are open for nominations and our 2024 judges are waiting for your nominations! Hannah Foxwell, Jonathan Riddell, and Nicole Tandy will be selecting winners for 12 categories.

The OpenUK Awards 2024 are open for nominations until Sunday, September 15. Our 5th Awards again celebrate the UK’s leadership and global collaboration in open technology!

Nominate now! https://openuk.uk/awards/openuk-awards-2024/

Up to 3 shortlisted nominees will be selected in each category by early October and each nominee will be given one place at the Oscars of Open Source, the black tie Awards Ceremony and Gala Dinner for our 5th Awards held at the House of Lords on 28 November, thanks to the sponsorship of Lord Wei.

on September 10, 2024 02:28 PM

Announcing Incus 6.5

Stéphane Graber

This release contains a very good mix of bug fixes and performance improvements as well as exciting new features across the board!

The highlights for this release are:

  • Instance auto-restart
  • Column selection in all list commands
  • QMP command hooks and scriptlet
  • Live disk resize for VMs
  • PCI devices hotplug for VMs
  • OVN load-balancer health checks
  • OVN Interconnect ECMP support
  • OVN NICs promiscuous mode
  • OVN NICs disabling of IP allocation
  • Configurable LVM PV metadata size
  • Configurable OVS socket path

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on September 10, 2024 07:00 AM

September 06, 2024

This is mostly an informational PSA for anyone struggling to get Windows 3.11 working in modern versions of QEMU. Yeah, I know, not exactly a massively viral target audience.

Anyway, short answer, use QEMU 5.2.0 from December 2020 to run Windows 3.11 from November 1993.

Windows 3.11, at 1280x1024, running Internet Explorer 5, looking at a GitHub issue

An innocent beginning

I made a harmless jokey reply to a toot from Thom at OSNews, lamenting the lack of native Mastodon client for Windows 3.11.

When I saw Thom’s toot, I couldn’t resist, and booted a Windows 3.11 VM that I’d installed six weeks ago, manually from floppy disk images of MSDOS and Windows.

I already had Lotus Organiser installed to post a little bit of nostalgia-farming on threads - it’s what they do over there.


I thought it might be fun to post a jokey diary entry. I hurriedly made my silly post five minutes after Thom’s toot, expecting not to think about this again.

Incorrect, brain

I shut the VM down, then went to get coffee, chuckling to my smart, smug self about my successful nerdy rapid-response. While the kettle boiled, I started pondering - “Wait, if I really did want to make a Mastodon client for Windows 3.11, how would I do it?”

I pondered and dismissed numerous shortcuts, including, but not limited to:

  • Fake it with screenshots doctored in MS Paint
  • Run an existing DOS Mastodon Client in a Window
  • Use the Windows Telnet client to connect insecurely to my laptop running the Linux command-line Mastodon client, Toot
  • Set up a proxy through which I could get to a Mastodon web page

I pondered a different way, in which I’d build a very simple proof of concept native Windows client, and leverage the Mastodon API. I’m not proficient in (m)any programming languages, but felt something like Turbo Pascal was time-appropriate and roughly within my capabilities.

Diversion

My mind settled on Borland Delphi, which I’d never used, but looked similar enough for a silly project to Borland Turbo Pascal 7.0 for DOS, which I had. So I set about installing Borland Delphi 1.0 from fifteen (virtual) floppy disks, onto my Windows 3.11 “Workstation” VM.

Windows 3.11, with a Borland Delphi window open

Thank you, whoever added the change floppy0 option to the QEMU Monitor. That saved a lot of time, and was reduced down to a loop of this fourteen times:

"Please insert disk 2"
CTRL+ALT+2
(qemu) change floppy0 Disk02.img
CTRL+ALT+1
[ENTER]

During my research for this blog, I found a delightful, nearly decade-old video of David Intersimone (“David I”) running Borland Delphi 1 on Windows 3.11. David makes it all look so easy. Watch this to get a moving-pictures-with-sound idea of what I was looking at in my VM.

Once Delphi was installed, I started pondering the network design. But that thought wasn’t resident in my head for long, because it was immediately replaced with the reason why I didn’t use that Windows 3.11 VM much beyond the original base install.

The networking stack doesn’t work. Or at least, it didn’t.

That could be a problem.

Retro spelunking

I originally installed the VM by following this guide, which is notable as having additional flourishes like mouse, sound, and SVGA support, as well as TCP/IP networking. Unfortunately I couldn’t initially get the network stack working as Windows 3.11 would hang on a black screen after the familiar OS splash image.

Looking back to my silly joke, those 16-bit Windows-based Mastodon dreams quickly turned to dust when I realised I wouldn’t get far without an IP address in the VM.

Hopes raised

After some digging in the depths of retro forums, I stumbled on a four-year-old repo maintained by Jaap Joris Vens.

Here’s a fully configured Windows 3.11 machine with a working internet connection and a load of software, games, and of course Microsoft BOB 🤓

Jaap Joris published this ready-to-go Windows 3.11 hard disk image for QEMU, chock full of games, utilities, and drivers. I thought that perhaps their image was configured differently, and thus worked.

However, after downloading it, I got the same “black screen after splash” as with my image. Other retro enthusiasts had the same issue, and reported the details on this issue, about a year ago.

does not work, black screen.

It works for me and many others. Have you followed the instructions? At which point do you see the black screen?

The key to finding the solution was a comment from Jaap Joris pointing out that the disk image “hasn’t changed since it was first committed 3 years ago”, implying it must have worked back then, but doesn’t now.

Joy of Open Source

I figured that if the original uploader had at least some success when the image was created and uploaded, it is indeed likely QEMU or some other component it uses may have (been) broken in the meantime.

So I went rummaging in the source archives, looking for the most recent release of QEMU, immediately prior to the upload. QEMU 5.2.0 looked like a good candidate, dated 8th December 2020, a solid month before 18th January 2021 when the hda.img file was uploaded.

If you build it, they will run

It didn’t take long to compile QEMU 5.2.0 on my ThinkPad Z13 running Ubuntu 24.04.1. It went something like this. I presumed that getting the build dependencies for whatever is the current QEMU version in the Ubuntu repo today would get me most of the requirements.

$ sudo apt-get build-dep qemu
$ mkdir qemu
$ cd qemu
$ wget https://download.qemu.org/qemu-5.2.0.tar.xz
$ tar xvf qemu-5.2.0.tar.xz
$ cd qemu-5.2.0
$ ./configure
$ make -j$(nproc)

That was pretty much it. The build ran for a while, and out popped binaries and the other stuff you need to emulate an old OS. I copied the bits required directly to where I already had put Jaap Joris’ hda.img and start script.

$ cd build
$ cp qemu-system-i386 efi-rtl8139.rom efi-e1000.rom efi-ne2k_pci.rom kvmvapic.bin vgabios-cirrus.bin vgabios-stdvga.bin vgabios-vmware.bin bios-256k.bin ~/VMs/windows-3.1/

I then tweaked the start script to launch the local home-compiled qemu-system-i386 binary, rather than the one in the path, supplied by the distro:

$ cat start
#!/bin/bash
./qemu-system-i386 -nic user,ipv6=off,model=ne2k_pci -drive format=raw,file=hda.img -vga cirrus -device sb16 -display gtk,zoom-to-fit=on

This worked a treat. You can probably make out in the screenshot below, that I’m using Internet Explorer 5 to visit the GitHub issue which kinda renders when proxied via FrogFind by Action Retro.

Windows 3.11, at 1280x1024, running Internet Explorer 5, looking at a GitHub issue

Share…

I briefly toyed with the idea of building a deb of this version of QEMU for a few modern Ubuntu releases and throwing that in a Launchpad PPA, then realised I’d need to make sure the name doesn’t collide with the packaged QEMU in Ubuntu.

I honestly couldn’t be bothered to go through the pain of effectively renaming (forking) QEMU to something like OLDQEMU so as not to damage existing installs. I’m sure someone could do it if they tried, but I suspect it’s quite a search and replace, or move the binaries somewhere under /opt. Too much effort for my brain.

I then started building a snap of qemu as oldqemu - which wouldn’t require any “real” forking or renaming. The snap could be called oldqemu but still contain qemu-system-i386 which wouldn’t clash with any existing binaries of the same name as they’d be self-contained inside the compressed snap, and would be launched as oldqemu.qemu-system-i386.

That would make for one package to maintain rather than one per release of Ubuntu. (Which is, as I am sure everyone is aware, one of the primary advantages of making snaps instead of debs in the first place.)

Anyway, I got stuck with another technical challenge in the time I allowed myself to make the oldqemu snap. I might re-visit it, especially as I could leverage the Launchpad Build farm to make multiple architecture builds for me to share.

…or not

In the meantime, the instructions are above, and also (roughly) in the comment I left on the issue, which has kindly been re-opened.

Now, about that Windows 3.11 Mastodon client…

on September 06, 2024 01:40 PM

September 05, 2024

uCareSystem has had the ability to detect packages that were uninstalled and then remove their config files. Now it uses a better way that detects more. Also with this release, there are fixes and enhancements that make it even more useful. First of all, it’s the Olympics… you saw the app icon that was changed […]
on September 05, 2024 09:09 PM

September 02, 2024

Beer, cake and ISO testing amidst rugby and jazz band chaos

On Saturday, the Debian South Africa team got together in Cape Town to celebrate Debian’s 31st birthday and to perform ISO testing for the Debian 11.11 and 12.7 point releases.

We ran out of time to organise a fancy printed cake like we had last year, but our improvisation worked out just fine!

We thought that we had allotted plenty of time for all of our activities for the day, and that there would be plenty of time for everything including training, but the day zipped by really fast. We hired a venue at a brewery, which is usually really nice because they have an isolated area with lots of space and a big TV – nice for presentations, demos, etc. But on this day, there was a big rugby match between South Africa and New Zealand, and as it got closer to the game, the place just got louder and louder (especially as a band started practicing and doing sound tests for their performance for that evening) and it turned out our space was also double-booked later in the afternoon, so we had to relocate.

Even amidst all the chaos, we ended up having a very productive day and we even managed to have some fun!

Four people from our local team performed ISO testing for the very first time, and in total we covered 44 test cases locally. Most of the other testers were the usual crowd in the UK; we also did a brief video call with them, but it was dinner time for them so we had to keep it short. Next time we’ll probably have some party line open that any tester can also join.

Logo

We went through some more iterations of our local team logo that Tammy has been working on. They’re turning out very nice and have been in progress for more than a year; I guess, like most things Debian, it will be ready when it’s ready!

Debian 11.11 and Debian 12.7 released, and looking ahead towards Debian 13

Both point releases tested just fine and were released later in the evening. I’m very glad that we managed to be useful and reduce total testing time and that we managed to cover all the test cases in the end.

A bunch of things we really wanted to fix by the time Debian 12 launched are now finally fixed in 12.7. There are still a few minor annoyances, but overall, Debian 13 (trixie) is looking even better than Debian 12 was around this time in the release cycle.

Freeze dates for trixie have not yet been announced; I hope that the release team announces those sooner rather than later. Also, KDE Plasma 6 hasn’t yet made its way into unstable. I’ve seen quite a number of people ask about this online, so hopefully that works out.

And by the way, the desktop artwork submissions for trixie end in two weeks! More information about that is available on the Debian wiki if you’re interested in making a contribution. There are already 4 great proposals.

Debian Local Groups

Organising local events for Debian is probably easier than you think, and Debian does make funding available for events. So, if you want to grow Debian in your area, feel free to join us at -localgroups on the OFTC IRC network, also plumbed on Matrix at -localgroups:matrix.debian.social – where we’ll try to answer any questions you might have and guide you through the process!

Oh and btw… South Africa won the Rugby!

on September 02, 2024 01:01 PM

September 01, 2024

All but about four hours of my Debian contributions this month were sponsored by Freexian. (I ended up going a bit over my 20% billing limit this month.)

You can also support my work directly via Liberapay.

man-db and friends

I released libpipeline 1.5.8 and man-db 2.13.0.

Since autopkgtests are great for making sure we spot regressions caused by changes in dependencies, I added one to man-db that runs the upstream tests against the installed package. This required some preparatory work upstream, but otherwise was surprisingly easy to do.

OpenSSH

I fixed the various 9.8 regressions I mentioned last month: socket activation, libssh2, and Twisted. There were a few other regressions reported too: TCP wrappers support, openssh-server-udeb, and xinetd were all broken by changes related to the listener/per-session binary split, and I fixed all of those.

Once all that had made it through to testing, I finally uploaded the first stage of my plan to split out GSS-API support: there are now openssh-client-gssapi and openssh-server-gssapi packages in unstable, and if you use either GSS-API authentication or key exchange then you should install the corresponding package in order for upgrades to trixie+1 to work correctly. I’ll write a release note once this has reached testing.

Multiple identical results from getaddrinfo

I expect this is really a bug in a chroot creation script somewhere, but I haven’t been able to track down what’s causing it yet. My sbuild chroots, and apparently Lucas Nussbaum’s as well, have an /etc/hosts that looks like this:

$ cat /var/lib/schroot/chroots/sid-amd64/etc/hosts
127.0.0.1       localhost
127.0.1.1       [...]
127.0.0.1       localhost ip6-localhost ip6-loopback

The last line clearly ought to be ::1 rather than 127.0.0.1; but things mostly work anyway, since most code doesn’t really care which protocol it uses to talk to localhost. However, a few things try to set up test listeners by calling getaddrinfo("localhost", ...) and binding a socket for each result. This goes wrong if there are duplicates in the resulting list, and the test output is typically very confusing: it looks just like what you’d see if a test isn’t tearing down its resources correctly, which is a much more common thing for a test suite to get wrong, so it took me a while to spot the problem.
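To make that concrete, the pattern that breaks looks roughly like this (a minimal sketch, not taken from either affected package's test suite): bind one listener per getaddrinfo() result on a fixed port. With a sane /etc/hosts you get one IPv4 and one IPv6 listener; with the duplicate 127.0.0.1 line above, the second identical result makes bind() fail with EADDRINUSE, which looks exactly like a leaked socket.

import socket

PORT = 43210  # arbitrary fixed port, purely for the demonstration

listeners = []
try:
    for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
            "localhost", PORT, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.bind(sockaddr)   # raises OSError (EADDRINUSE) on a duplicate result
        s.listen(1)
        listeners.append(s)
    print("bound", len(listeners), "listeners")
finally:
    for s in listeners:
        s.close()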

I ran into this in both python-asyncssh (#1052788, upstream PR) and Ruby (ruby3.1/#1069399, ruby3.2/#1064685, ruby3.3/#1077462, upstream PR). The latter took a while since Ruby isn’t one of my languages, but hey, I’ve tackled much harder side quests. I NMUed ruby3.1 for this since it was showing up as a blocker for openssl testing migration, but haven’t done the other active versions (yet, anyway).

OpenSSL vs. cryptography

I tend to care about openssl migrating to testing promptly, since openssh uploads have a habit of getting stuck on it otherwise.

Debian’s OpenSSL packaging recently split out some legacy code (cryptography that’s no longer considered a good idea to use, but that’s sometimes needed for compatibility) to an openssl-legacy-provider package, and added a Recommends on it. Most users install Recommends, but package build processes don’t; and the Python cryptography package requires this code unless you set the CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 environment variable, which caused a bunch of packages that build-depend on it to fail to build.
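For anyone hitting the same failure in their own builds or test suites, the workaround is simply to get the variable into the environment before cryptography loads its OpenSSL bindings; a rough sketch (not any particular package's actual fix):

import os

# Must be set before the cryptography OpenSSL bindings are first loaded,
# e.g. at the very top of a test runner or conftest.py.
os.environ.setdefault("CRYPTOGRAPHY_OPENSSL_NO_LEGACY", "1")

from cryptography.hazmat.primitives import hashes

digest = hashes.Hash(hashes.SHA256())
digest.update(b"some data")
print(digest.finalize().hex())

In a Debian package build this would more typically be exported from debian/rules rather than set in Python, but the effect is the same.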

After playing whack-a-mole setting that environment variable in a few packages’ build process, I decided I didn’t want to be caught in the middle here and filed an upstream issue to see if I could get Debian’s OpenSSL team and cryptography’s upstream talking to each other directly. There was some moderately spirited discussion and the issue remains open, but for the time being the OpenSSL team has effectively reverted the change so it’s no longer a pressing problem.

GCC 14 regressions

Continuing from last month, I fixed build failures in pccts (NMU) and trn4.

Python team

I upgraded alembic, automat, gunicorn, incremental, referencing, pympler (fixing compatibility with Python >= 3.10), python-aiohttp, python-asyncssh (fixing CVE-2023-46445, CVE-2023-46446, and CVE-2023-48795), python-avro, python-multidict (fixing a build failure with GCC 14), python-tokenize-rt, python-zipp, pyupgrade, twisted (fixing CVE-2024-41671 and CVE-2024-41810), zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions. In the process, I added myself to Uploaders for zope.interface; I’m reasonably comfortable with the Zope Toolkit and I seem to be gradually picking up much of its maintenance in Debian.

A few of these required their own bits of yak-shaving:

I improved some Multi-Arch: foreign tagging (python-importlib-metadata, python-typing-extensions, python-zipp).

I fixed build failures in pipenv, python-stdlib-list, psycopg3, and sen, and fixed autopkgtest failures in autoimport (upstream PR), python-semantic-release and rstcheck.

Upstream for zope.file (not in Debian) filed an issue about a test failure with Python 3.12, which I tracked down to a Python 3.12 compatibility PR in zope.security.

I made python-nacl build reproducibly (upstream PR).

I moved aliased files from / to /usr in timekpr-next (#1073722).

Installer team

I applied a patch from Ubuntu to make os-prober support building with the noudeb profile (#983325).

on September 01, 2024 01:29 PM

Plesk high swap usage

Dougie Richardson

Seen warnings about high swap consumption in Plesk on Ubuntu 20.04.6 LTS:

Had a look in top and noticed clamavd using 1.0G of swap. After a little digging around, it might be related to a change in ClamAV 0.103.0 where non-blocking signature database reloads were introduced.

Major changes

  • clamd can now reload the signature database without blocking scanning. This multi-threaded database reload improvement was made possible thanks to a community effort.
    • Non-blocking database reloads are now the default behavior. Some systems that are more constrained on RAM may need to disable non-blocking reloads, as it will temporarily consume double the amount of memory. We added a new clamd config option ConcurrentDatabaseReload, which may be set to no.

I disabled the option (by setting ConcurrentDatabaseReload to no, as described in the release notes above) and the difference is dramatic.

I’ll keep an eye on it I guess.

on September 01, 2024 09:11 AM

August 31, 2024

Thanks to all the hard work from our contributors, Lubuntu 24.04.1 LTS has been released. With the codename Noble Numbat, Lubuntu 24.04 is the 26th release of Lubuntu, the 12th release of Lubuntu with LXQt as the default desktop environment. Support lifespan: Lubuntu 24.04 LTS will be supported for 3 years until April 2027. Our […]
on August 31, 2024 12:17 PM

August 29, 2024

Upgrades from 22.04 LTS also enabled!

The Ubuntu Studio team is pleased to announce the first service release of Ubuntu Studio 24.04 LTS, 24.04.1. This also marks the opening of upgrades from Ubuntu Studio 22.04 LTS to 24.04 LTS.

If you are running Ubuntu Studio 22.04, you should be receiving an upgrade notification in a matter of days upon login.

Notable Bugs Fixed Specific to Ubuntu Studio:

  • Fixed an issue where PipeWire could not send long SysEx messages when bridging to some MIDI controllers.
  • DisplayCal would not launch as it required a version of Python older than the Python 3.12 released with Ubuntu 24.04. This has been fixed.
  • The new installer doesn’t configure users to be part of the audio group by default. However, upon first login, the user that just logged in is automatically configured, but this requires the system to be completely restarted to take effect. The fix to make this seamless is in progress.

Other bugfixes are in progress and/or fixed and can be found in the Ubuntu release notes or the Kubuntu release notes for the desktop environment.

How to get Ubuntu Studio 24.04.1 LTS

Ubuntu Studio 24.04.1 LTS can be found on our download page.

Upgrading to Ubuntu Studio 24.04.1 LTS

If you are running Ubuntu Studio 24.04 LTS, you already have it.

If you are running Ubuntu Studio 22.04 LTS, wait for a notification in your system tray. Otherwise, see the instructions in the release notes.

Contributing and Donating

Right now we mostly need financial contributions and donations. As stated before, our project leader’s family is in a time of need, with his wife losing her job unexpectedly. We would like to keep the project running and be able to give above and beyond to help them.

Therefore, if you find Ubuntu Studio useful and can find it in your heart to give what you think it’s worth and then some, please do give.

Ways to donate can be found in the sidebar as well as at ubuntustudio.org/contribute.

on August 29, 2024 09:17 PM

Around a decade ago, I was happy to learn about bcache – a Linux block cache system that implements tiered storage (like a pool of hard disks with SSDs for cache) on Linux. At that stage, ZFS on Linux was nowhere close to where it is today, so any progress on gaining more ZFS features in general Linux systems was very welcome. These days we care a bit less about tiered storage, since any cost benefit in using anything else than nvme tends to quickly evaporate compared to time you eventually lose on it.

In 2015, it was announced that bcache would grow into its own filesystem. This was particularly exciting and it caused quite a buzz in the Linux community, because it brought along with it more features that compare with ZFS (and also btrfs), including built-in compression, built-in encryption, check-summing and RAID implementations.

Unlike ZFS, it didn’t have a dkms module, so if you wanted to test bcachefs back then, you’d have to pull the entire upstream bcachefs kernel source tree and compile it. Not ideal, but for a promise of a new, shiny, full-featured filesystem, it was worth it.

In 2019, it seemed that the time has come for bcachefs to be merged into Linux, so I thought that it’s about time we have the userspace tools (bcachefs-tools) packaged in Debian. Even if the Debian kernel wouldn’t have it yet by the time the bullseye (Debian 11) release happened, it might still have been useful for a future backported kernel or users who roll their own.

By total coincidence, the first git snapshot that I got into Debian (version 0.1+git20190829.aa2a42b) was committed exactly 5 years ago today.

It was quite easy to package it, since it was written in C and shipped with a makefile that just worked, and it made it past NEW into unstable on 19 January 2020, just as I was about to head off to FOSDEM as the pandemic started, but that’s of course a whole other story.

Fast-forwarding towards the end of 2023, version 1.2 shipped with some utilities written in Rust. This caused a little delay, since I wasn’t at all familiar with Rust packaging yet, so I shipped an update that didn’t yet include those utilities, and saw this as an opportunity to learn more about how the Rust ecosystem worked and about Rust in Debian.

So, back in April the Rust dependencies for bcachefs-tools in Debian didn’t at all match the build requirements. I got some help from the Rust team, who said that the common practice is to relax the dependencies of Rust software so that it builds in Debian. So errno, which needed the exact version 0.2, was relaxed so that it could build with version 0.4 in Debian, udev 0.7 was relaxed for 0.8 in Debian, memoffset from 0.8.5 to 0.6.5, paste from 1.0.11 to 1.0.8, and bindgen from 0.69.9 to 0.66.

I found this a bit disturbing, but it seems that some Rust people have lots of confidence that if something builds, it will run fine. And at least it did build, and the resulting binaries did work, although I’m personally still not very comfortable or confident about this approach (perhaps that might change as I learn more about Rust).

With that in mind, at this point you may wonder how any distribution could sanely package this. The problem is that they can’t. Fedora and other distributions with stable releases take a similar approach to what we’ve done in Debian, while distributions with much more relaxed policies (like Arch) include all the dependencies as they are vendored upstream.

As it stands now, bcachefs-tools is impossible to maintain in Debian stable. While my primary concerns when packaging are for Debian unstable and the next stable release, I also keep in mind people who have to support these packages long after I stopped caring about them (like Freexian who does LTS support for Debian or Canonical who has long-term Ubuntu support, and probably other organisations that I’ve never even heard of yet). And of course, if bcachefs-tools doesn’t have any usable stable releases, it doesn’t have any LTS releases either, so anyone who needs to support bcachefs-tools long-term has to carry the support burden on their own, and if they bundle its dependencies, then those as well.

I’ll admit that I don’t have any solution for fixing this. I suppose if I were upstream I might look into the possibility of at least supporting a larger range of recent dependencies (usually easy enough if you don’t hop onto the newest features right away) so that distributions with stable releases only need to concern themselves with providing some minimum recent versions, but even if that could work, the upstream author is 100% against any solution other than vendoring all its dependencies with the utility and insisting that it must only be built using these bundled dependencies. I’ve made 6 uploads for this package so far this year, but still I constantly get complaints that it’s out of date and that it’s ancient. If a piece of software is considered so old that it’s useless by the time it’s been published for two or three months, then there’s no way it can survive even a usual stable release cycle, nevermind any kind of long-term support.

With this in mind (not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit), I decided to remove bcachefs-tools from Debian completely. Although after discussing this with another DD, I was convinced to orphan it instead, which I have now done. I made an upload to experimental so that it’s still available if someone wants to work on it (without having to go through NEW again), it’s been removed from unstable so that it doesn’t migrate to testing, and the ancient (especially by bcachefs-tools standards) versions that are in stable and oldstable will be removed too, since they are very likely to cause damage with any recent kernel versions that support bcachefs.

And so, my adventure with bcachefs-tools comes to an end. I’d advise that if you consider using bcachefs for any kind of production use in the near future, you first consider how supportable it is long-term, and whether there’s really anyone at all that is succeeding in providing stable support for it.

on August 29, 2024 01:04 PM

August 21, 2024

Here's the tl;dr: if you make web apps in or for the UK, the CMA, the UK tech regulator, want to hear from you about their proposals before August 29th 2024, which is only a week away. Read their list of remedies to anticompetitive behaviour between web browsers and platforms, and email your thoughts to [email protected] before the deadline. They really do want to hear from you, confidentially if you want, and your voice is useful here; you don't need to have some formally written legal opinion. They want to hear from actual web devs and companies. Email them.

We want to hear from you -- Competition and Markets Authority

Now let's look at what the CMA have written in a little more detail. (This is the "tl" bit, although hopefully you will choose to "r".) They have been conducting a "Market Investigation Reference", which is regulator code for "talk to loads of people to see if there's a problem and then decide what to do about that", and the one we care about is about web browsers. I have, as part of Open Web Advocacy, been part of those consultations a number of times, and they've always been very willing to listen, and they do seem to identify a bunch of problems with browser diversity that I personally also think are problems. You know what we're talking about here: all browsers on iOS are required to be Safari's WebKit and not their own engine; Google have a tight grip on a whole bunch of stuff; browser diversity is a good thing and there's not enough of it in the world and this looks to be from deliberate attempts to act like monopolies by some players. These are the sorts of issues that CMA are concerned about (and they have published quite a few working papers explaining their issues in detail which you can read). What we're looking at today is their proposed list of fixes for these problems, which they call "remedies". At OWA we have also done this, of course, and you should read the OWA blog post about the CMA's remedies paper. But the first important point is, to be clear, that a whole bunch of these remedies being proposed by the CMA are good. This is not a complaint that it's all bad or that it's toothless, not at all. They're going to stop the #AppleBrowserBan and require that other browsers are allowed to use their own engines on iOS as browser apps and in in-app browsing, they're going to require both Apple and Google to grant other browsers access to the same APIs that their own browsers can get at, they've got suggestions in place for how users can choose which browser they use to get past the problem of the "default hotseat" where you get one browser when you buy a phone and never think to change it, they're suggesting that Google open access to WebAPK minting to everyone. All of these help demonopolise the UK market. This is all good.

Stuart Langridge, Bruce Lawson, and Alex Moore of OWA in the CMA offices in London

But there are some places where their remedies don't really go far enough, and this is the stuff where it would be good for you, my web-engaged reader, to send them an email with your thoughts one way or the other. Part of making the web as capable as a platform-specific app is that web sites can be presented like a platform-specific app. This is (part of) what a PWA is, which most of you reading this will already know about. But releasing your app as a PWA, while it has a bunch of good characteristics for you (no reviews needed, instant updates, full control, cross-platform development, no tithing of money required to the people who sold the phone it's on) also has some downsides. In particular, it's difficult to get people to "install" a PWA, especially on iOS where you have to tell your users to go digging around in the share menu. And this is a fairly big area where the CMA could have proposed remedies, and have so far not chosen to. The first problem here is that iOS Safari doesn't support any sort of install prompt: as mentioned, there's the "add to home screen" entry hidden in the share menu. There's an API for this, everyone else implements it, iOS Safari doesn't. Maybe the API's got problems and needs fixing? That seems fine; engage with the web standards process to get it fixed! But there's been no sign of doing that in any useful way.

The second and related issue is that although the CMA's remedies state that browsers can use their own engine rather than having to be mere wrappers around the platform's web view, they do not say that when a browser installs a web app, that web app will also use that browser's engine. That is: if there were a port of, say, Microsoft Edge to iOS, then Edge would be able to use its own engine, which is Microsoft's port of Blink. That Edge browser can implement install prompts how it likes, because it's using its own engine. But there's no guarantee in the CMA remedies that the PWA that gets installed will then be opened up in Edge. Calling the "install this PWA as an app" API provided by the platform might add it as a PWA in the platform maker's browser -- iOS Safari in this example. This would be rubbish. It means that the installed app might not even work; how will it know your passwords or cookies, etc.? This can't be what's intended. But the remedies do not explicitly state this requirement, and so it's quite possible that platform owners will therefore use this as another way to push back against PWAs to make them less of a competitor to their own app stores. I would like to be able to say that platform owners wouldn't do that, that they wouldn't deliberately make things worse in an effort at malicious compliance, but after the debacle earlier this year of Apple dropping PWA support entirely and then only backing off on that after public outcry, we can't assume that there will be a good-faith attempt to improve PWA support (either by implementation, or by engaging wholeheartedly with the public web standards process), and so the remedies need to spell this out in more detail. This should be easy enough if I'm right and the CMA's intent is that this should be done, and your voice adding to that is likely to encourage them.

A tweet from Ada Rose Cannon reading 'Seeing a Web App I worked on used by *Apple* to justify that the Web is a viable platform on iOS is bullshit. The Web can be an ideal place to build apps but Apple is consistently dragging their heals on implementing the Web APIs that would allow them to compete with native apps', quoting a tweet by Peter Gasston with text 'This image from Apple‘s opening presentation in the Epic Games court case is very misleading. “Web Apps and Native Apps can look the same, therefore no-one needs to publish on the App Store”.' and an Apple-created image of the FT web app and FT platform-specific app looking similar

The worry about malicious compliance hampering web apps being a proper competitor to platform-specific apps also extends to another thing missing in the remedies: that access to hardware and software platform APIs for other browsers isn't required to be "which APIs there are", but "which APIs the existing browser elects to use". That is: if you write a native platform-specific app, it can talk to various hardware-ish things: Bluetooth, USB, NFC, whichever. Therefore, you ought to be able, if you're a browser, to also have those APIs, in enough detail that you can then offer (mediated, secure) access to those services to your users, the PWAs and websites that people run with you. But the remedies do not ensure that this is the case; they ensure that there is a "requirement for Apple to grant equivalent access to APIs used by WebKit and Safari to browsers using alternative browser engines." What this means is that if Safari doesn't use a thing, no other browser can use it either. So it's not possible to make the browser you build better than Safari at this; Apple get to set the ceiling of what the web can do on the platform, and the ceiling is defined as "whatever their browser can do". That's... not very competitive, is it? No. If you agree with that, then you should also write to the CMA about it. They would like to hear about actual examples where this sort of thing harms UK businesses, of course, and if that's you, definitely write in, but it's also worth giving your opinion if you are a UK developer who works in this area, delivering things via the web to your users (or if you want to do that but are currently prevented).

OK. Discussion over: go and write to the CMA with your thoughts on their remedies. Even if you think what they've proposed is perfect, you should still send them a message saying that; one thing that they and all government agencies tend to bemoan is that they only hear from people with lots of skin in the game, and generally only from people who are opposed, not people who are supportive. That means that they get a skewed view of what the web developer community actually think, and this is a chance for us to unskew that a bit, together. You can request that the CMA keep your name, business, or submission confidential, so you don't have to worry about giving away secrets or your participation, and you need only comment on stuff which is relevant to you; you do not need a comprehensive position paper on the whole thing! The address to email is [email protected], the list of remedies is Working Paper 7, and the deadline is Thursday 29th August.

State of the Browser 2024

If you want to hear more about this, then I am speaking about OWA, how it happened, what we've done, and how you can be involved at State of the Browser 2024 on Saturday 14th September (just under a month from now!) in the Barbican in London. I'm told that there are fewer than 30 in-person tickets left, although there are online streaming tickets available too, so get in quick if you want to hear me and a whole bunch of other speakers!

(Late breaking update: Bruce has also written about this and you should read that too!)

on August 21, 2024 08:07 AM

August 14, 2024

Netplan v1.1 released

Lukas Märdian

I’m happy to announce that Netplan version 1.1 is now available on GitHub and is soon to be deployed into a Debian and/or Ubuntu installation near you! Six months and 120 commits after the previous version (including one patch release v1.0.1), this release is brought to you by 17 free software contributors from around the globe. 🚀

Kudos to everybody involved! ❤

Highlights

  • Custom systemd-networkd-wait-online logic override to wait for link-local and routable interfaces. (#456, #482)
  • Modification of the embedded-switch-mode setting without virtual-function (VF) definitions on SR-IOV devices (#454)
  • Parser flag to ignore individual, broken configurations, instead of not generating any backend configuration (#412)
  • Fixes for @ProtonVPN (#495) and @microsoft Azure Linux (#445), contributed by those companies

Releasing v1.1

Documentation

Bug fixes

New Contributors

Full Changelog: 1.0…1.1

on August 14, 2024 01:41 PM

August 12, 2024

Another loss last week of a friend. I am staying strong and working through it. A big thank you to all of you who have donated to my car fund; I still have a long way to go. I am not above getting a cheap old car, but we live in sand dunes, so it must be a cheap old car with 4×4 to get to my property. A vehicle is necessary as we are 50 miles away from staples such as food and water. We also have 2 funerals to attend. Please consider a donation if my work is useful to you. https://gofund.me/1e784e74 All of my work is currently unpaid, as I am between contracts. Thank you for your consideration. Now onto the good stuff: last week's work. It was another very busy week with Qt6 packaging in Debian/Kubuntu and KDE snaps. I also have many SRUs for the Kubuntu Noble .1 release that need verification.

Kubuntu:

Debian:

Starting the salvage process for kdsoap, which is blocking a long line of packages, notably kio-extras.

  • qtmpv – in NEW
  • arianna – in NEW
  • xwaylandvideobridge – NEW
  • futuresql – NEW
  • kpat WIP – failing tests
  • kdegraphics-thumbnailers (WIP)
  • khelpcenter – experimental
  • kde-inotify-survey – experimental
  • ffmpegthumbs – experimental
  • kdialog – experimental
  • kwalletmanager – experimental
  • libkdegames – pushed some fixes – experimental
  • Tokodon – Done, but needs qtmpv to pass NEW
  • Gwenview – WIP, needs kio-extras (blocked)

KDE Snaps:

Please note: please help test the --edge snaps so I can promote them to stable.

WIP Snaps or MR’s made

  • Kirigami-gallery ( building )
  • Kiriki (building)
  • Kiten (building)
  • kjournald (Building)
  • Kdevelop (WIP)
  • Kdenlive (building)
  • KHangman (WIP)
  • Kubrick (WIP)
  • Palapeli (Manual review in store dbus)
  • Kanagram (WIP)
  • Labplot (WIP)
  • Kjumpingcube (MR)
  • Klettres (MR)
  • Kajongg --edge (Broken, problem with pyqt)
  • Dragon --edge ( Broken, dbus fails)
  • Ghostwriter --edge ( Broken, need to work out Qt WebEngine's obscure way of handling hunspell dictionaries.)
  • Kasts --edge ( Broken, portal failure, testing some plugs)
  • Kbackup --edge ( Needs auto-connect udisks2, added home plug)
  • Kdebugsettings --edge ( Added missing personal-files plug, will need approval)
  • KDiamond --edge ( sound issues )
  • Angelfish --edge https://snapcraft.io/angelfish ( Crashes on first run, but runs fine after that... looking into it)
  • Qrca --edge ( needs snap connect qrca:camera camera until auto-connect approved, will remain in --edge until official release)

Thanks for stopping by.

on August 12, 2024 04:33 PM

August 11, 2024

There are a lot of privileges most of us probably take for granted. Not everyone is gifted with the ability to do basic things like talk, walk, see, and hear. Those of us (like myself) who can do all of these things don’t really think about them much. Those of us who can’t, have to think about it a lot because our world is largely not designed for them. Modern-day things are designed for a fully-functional human being, and then have stuff tacked onto them to make them easier to use. Not easy, just “not quite totally impossible.”

Issues of accessibility plague much of modern-day society, but I want to focus on one pain-point in particular. Visually-impaired accessibility.

Now I’m not blind, so I am not qualified to say how exactly a blind person would use their computer. But I have briefly tried using a computer with my monitor turned off to test visually-impaired accessibility, so I know a bit about how it works. The basic idea seems to be that you launch a screen reader using a keyboard shortcut. That screen reader proceeds to try to describe various GUI elements to you at a rapid speed, from which you have to divine the right combination of Tab, Space, Enter, and arrow keys to get to the various parts of the application you want to use. Using these arcane sequences of keys, you can make the program do… something. Hopefully it’s what you wanted, but based on reports I’ve seen from blind users, oftentimes the computer reacts to your commands in highly unexpected ways.

The first thing here that jumps out to most people is probably the fact that using a computer blind is like trying to use magic. That’s a problem, but that’s not what I’m focusing on. I'm focusing on two words in particular.

Screen. Reader.

Wha…?

I want you to stop and take a moment to imagine the following scenario. You want to go to a concert, but can’t, so you send your friend to the concert in your place and ask them to record it for you. They do so, and come back with a video of the whole thing. They’ve transcribed every word of each song, and made music sheets detailing every note and chord the concert played. There’s even some cool colors and visualizer stuff that conveys the feeling and tempo of each song. They then proceed to lay all this glorious work out in front of you, claiming it conveys everything about the concert perfectly. Confronted with this onslaught of visual data, what’s the first thing you’re going to ask?

“Didn’t you record the audio?”

Of course that’s the first thing you’re going to ask, because it’s a concert for crying out loud, 90% of the point of it is the audio. I can listen to a simple, relatively low-quality recording of a concert’s audio and be satisfied. I get to hear the emotion, the artistry, everything. I don’t need a single pixel of images to let me experience it in a satisfactory way. On the other hand, I don’t care how detailed your video analysis of the audio is - if it doesn’t include the audio, I’m going to be upset. Potentially very upset.

Now let’s go back to the topic at hand, visually-impaired accessibility. What does a screen reader do? It takes a user interface, one designed for seeing users, and tries to describe it as best it can to the user via audio. You then have to use keyboard shortcuts to navigate the UI, which the screen reader continues to describe bits of as you move around. For someone who’s looking at the app, this is all fine and dandy. For someone who can kinda see, maybe it’s sufficient. But for someone who’s blind or severely visually impaired, this is difficult to use if you’re lucky. Chances are you’re not going to be lucky and the app you’re working with might as well not exist.

Why is this so hard? Why have decades of computer development not led to breakthroughs in accessibility for blind people? Because we’re doing the whole thing wrong! We’re taking a user interface designed specifically and explicitly for seeing users, and trying to convey it over audio! It’s as ridiculous as trying to convey a concert over video. A user who’s listening to their computer shouldn’t need to know how an app is visually laid out in order to figure out whether they need to press up arrow, right arrow, or Tab to get to their desired UI element. They shouldn’t have to think in terms of buttons and check boxes. These are inherently visual user interface elements. Forcing a blind person to use these is tantamount to torture.

On top of all of this, half the time screen readers don’t even work! People who design software are usually able to see. You just don’t think about how to make software usable for blind people when you can see. It’s not something that easily crosses your mind. But try turning your screen off and navigating your system with a screen reader, and suddenly you’ll understand what’s lacking about the accessibility features. I tried doing this once, and I went and turned the screen back on after about five minutes of futile keyboard bashing. I can’t imagine the frustration I would have experienced if I had literally no other option than to work with a screen reader. Add on top of that the possibility that the user of your app has never even seen a GUI element in their lives before because they can’t see at all, and now you have essentially a language barrier in the way too.

So what’s the solution to this? Better screen reader compatibility might be helpful, but I don’t think that’s ultimately the correct solution here. I think we need to collectively recognize that blind people shouldn’t have to work with graphical user interfaces, and design something totally new.

One of the advantages of Linux is that it’s just a bunch of components that work together to provide a coherent and usable set of features for working on your computer. You aren’t locked into using a UI that you don’t like - just use or create some other UI. All current desktop environments are based around a screen that the user can see, but there’s no rules that say it has to be that way. Imagine if instead, your computer just talked to you, telling you what app you were using, what keys to press to accomplish certain actions, etc. In response, you talked back to it using the keyboard or voice recognition. There would be no buttons, check boxes, menus, or other graphical elements - instead you’d have actions, options, feature lists, and other conceptual elements that can be conveyed over audio. Switching between UI elements with the keyboard would be intuitive, predictable, and simple, since the app would be designed from step one to work that way. Such an audio-centric user interface would be easy for a blind or vision-impaired person to use. If well-designed, it could even be pleasant. A seeing person might have a learning curve to get past, but it would be usable enough for them too. Taking things a step further, support for Braille displays would be very handy, though as I have never used one I don’t know how hard that would be to implement.

A lot of work would be needed in order to get to the point of having a full desktop environment that worked this way. We’d need toolkits for creating apps with intuitive, uniform user interface controls. Many sounds would be needed to create a rich sound scheme for conveying events and application status to the user. Just like how graphical apps need a display server, we’d also need an audio user interface server that would tie all the apps together, letting users multitask without their apps talking over each other or otherwise interfering. We’d need plenty of apps that would actually be designed to work in an audio-only environment. A text editor, terminal, and web browser are the first things that spring to mind, but email, chat, and file management applications would also be very important. There might even be an actually good use for AI here, in the form of an image “viewer” that could describe an image to the user. And of course, we’d need an actually good text-to-speech engine (Piper seems particularly promising here).

This is a pretty rough overview of how I imagine we could make the world better for visually impaired computer users. Much remains to be designed and thought about, but I think this would work well. Who knows, maybe Linux could end up being easier for blind users to use than Windows is!

Interested in helping make this happen? Head over to the Aurora Aural User Interface project on GitHub, and offer ideas!

on August 11, 2024 08:41 AM

August 10, 2024

For Additional Confusion

Benjamin Mako Hill

The Wikipedia article on antipopes can be pretty confusing! If you’d like to be even more confused, it can help with that!

on August 10, 2024 03:56 PM

August 08, 2024

The Xubuntu development update for August 2024 features Xubuntu 24.10, "Oracular Oriole," with Xfce 4.19, and many more updates.

The post Xubuntu Development Update August 2024 appeared first on Sean Davis.

on August 08, 2024 12:27 PM

August 04, 2024

The Freedesktop.org Specifications directory contains a list of common specifications that have accumulated over the decades and define how common desktop environment functionality works. The specifications are designed to increase interoperability between desktops. Common specifications make the life of both desktop-environment developers and especially application developers (who will almost always want to maximize the number of Linux DEs their app can run on and behave as expected in, to increase their app's target audience) a lot easier.

Unfortunately, building the HTML specifications and maintaining the directory of available specs has become a bit of a difficult chore, as the pipeline for building the site has become fairly old and unmaintained (parts of it still depended on Python 2). In order to make my life of maintaining this part of Freedesktop easier, I aimed to carefully modernize the website. I do have bigger plans to maybe eventually restructure the site to make it easier to navigate and not just a plain alphabetical list of specifications, and to integrate it with the Wiki, but in the interest of backwards compatibility and to get anything done in time (rather than taking on a mega-project that can’t be finished), I decided to just do the minimum modernization first to get a viable website, and do the rest later.

So, long story short: Most Freedesktop specs are written in DocBook XML. Some were plain HTML documents, some were DocBook SGML, a few were plaintext files. To make things easier to maintain, almost every specification is written in DocBook now. This also simplifies the review process and we may be able to switch to something else like AsciiDoc later if we want to. Of course, one could have switched to something other than DocBook, but that would have been a much bigger chore with a lot more broken links, and I did not want this to become an even bigger project than it already was; I wanted to keep its scope somewhat narrow.

DocBook is a markup language for documentation which has been around for a very long time, and therefore has older tooling around it. But fortunately our friends at openSUSE created DAPS (DocBook Authoring and Publishing Suite) as a modern way to render DocBook documents to HTML and other file formats. DAPS is now used to generate all Freedesktop specifications on our website. The website index and the specification revisions are also now defined in structured TOML files, to make them easier to read and to extend. A bunch of specifications that had been missing from the original website are also added to the index and rendered on the website now.
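For those unfamiliar with DAPS, building a document is a single command per specification; here is a minimal sketch, assuming a DocBook source tree with a DAPS doc-config file named DC-example in the current directory (that file name is hypothetical, not one of the actual spec configs):

# Validate the DocBook sources, then render them to HTML with DAPS
daps -d DC-example validate
daps -d DC-example html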

Originally, I wanted to put the website live in a temporary location and solicit feedback, especially since some links have changed and not everything may have redirects. However, due to how GitLab Pages worked (and due to me not knowing GitLab CI well enough…) the changes went live before their MR was actually merged. Rather than reverting the change, I decided to keep it (as the old website did not build properly anymore) and to see if anything breaks. So far, no dead links or bad side effects have been observed, but:

If you notice any broken link to specifications.fd.o or anything else weird, please file a bug so that we can fix it!

Thank you, and I hope you enjoy reading the specifications with better rendering and a more coherent look! 😃

on August 04, 2024 06:54 PM

Thankfully no tragedies to report this week! I thank each and every one of you who has donated to my car fund. I still have a ways to go and could use some more help so that we can go to the funeral. https://gofund.me/033eb25d I am between contracts and work packages, so all of my work is currently done for free. Thanks for your consideration.

Another very busy week getting qt6 updates in Debian, Kubuntu, and KDE snaps.

Kubuntu:

  • Merkuro and Neochat SRUs have made progress.
  • See Debian for the qt6 Plasma / applications work.

Debian:

  • qtmpv – in NEW
  • arianna – in NEW
  • kamera – experimental
  • libkdegames – experimental
  • kdenetwork-filesharing – experimental
  • xwaylandvideobridge – NEW
  • futuresql – NEW
  • kpat WIP
  • Tokodon – Done, but needs qtmpv to pass NEW
  • Gwenview – WIP needs kamera, kio-extras
  • kio-extras – Blocked on kdsoap, whose maintainer is not responding to bug reports or emails. Will likely fork in Kubuntu as our freeze quickly approaches.

KDE Snaps:

Updated Qt to 6.7.2, which required a rebuild of all our snaps. Also found an issue with mismatched ffmpeg libraries; we have to bundle them for now until the versioning issues are resolved.

Made new theme snaps for KDE Breeze: gtk-theme-breeze and icon-theme-breeze. If you use the Plasma Breeze theme, please install these and run:

for PLUG in $(snap connections | grep gtk-common-themes:icon-themes | awk '{print $2}'); do sudo snap connect ${PLUG} icon-theme-breeze:icon-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-3-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-3-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-2-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-2-themes; done
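
If you haven't installed the theme snaps yet, that is a single command (using the snap names given above); for example:

sudo snap install gtk-theme-breeze icon-theme-breeze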

This should resolve most theming issues. We are still waiting for kdeglobals to be merged in snapd to fix colorscheme issues; it is set for the next release. I am still working on qt6 themes and working out how to implement them in snaps, as they are more complex than gtk themes, with shared libraries and file structures.

Please note: please help test the --edge snaps so I can promote them to stable.

WIP Snaps or MR’s made

  • Juk (WIP)
  • Kajongg (WIP problem with pyqt)
  • Kalgebra (in store review)
  • Kdevelop (WIP)
  • Kdenlive (MR)
  • KHangman (WIP)
  • Ruqola (WIP)
  • Picmi (building)
  • Kubrick (WIP)
  • lskat (building)
  • Palapeli (MR)
  • Kanagram (WIP)
  • Labplot (WIP)
  • Ktuberling (building)
  • Ksudoku (building)
  • Ksquares (MR)
on August 04, 2024 12:35 PM

August 03, 2024

Gogh

Dougie Richardson

Check out these awesome terminal themes at http://gogh-co.github.io/Gogh/

on August 03, 2024 11:57 AM

July 30, 2024

With the work that has been done in the debian-installer/netcfg merge proposal !9, it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that comes pre-installed with Netplan, with all network configuration structured in /etc/netplan/.

In this write-up, I’d like to run you through a list of commands for experiencing the Netplan enabled installation process first-hand. Let’s start with preparing a working directory and installing the software dependencies for our virtualized Debian system:

$ mkdir d-i_tmp && cd d-i_tmp
$ apt install ovmf qemu-utils qemu-system-x86

Now let’s download the official (daily) mini.iso, linux kernel image and initrd.gz containing the Netplan enablement changes:

$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/mini.iso
$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/debian-installer/amd64/initrd.gz
$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/debian-installer/amd64/linux

Next we’ll prepare a VM by copying the EFI firmware files, preparing a persistent EFIVARS file (to boot from FS0:\EFI\debian\grubx64.efi), and creating a virtual disk for our machine:

$ cp /usr/share/OVMF/OVMF_CODE_4M.fd .
$ cp /usr/share/OVMF/OVMF_VARS_4M.fd .
$ qemu-img create -f qcow2 ./data.qcow2 20G

Finally, let’s launch the debian-installer using a preseed.cfg file that will automatically install Netplan (netplan-generator) for us in the target system. A minimal preseed file could look like this:

# Install minimal Netplan generator binary
d-i preseed/late_command string in-target apt-get -y install netplan-generator

For this demo, we’re installing the full netplan.io package (incl. the interactive Python CLI), as well as the netplan-generator package and systemd-resolved, to show the full Netplan experience. You can choose the preseed file from a set of different variants to test the different configurations.
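The variant files themselves aren't reproduced here, but the fuller setup described above might use a late_command along these lines (a sketch in the same preseed syntax as the minimal example, with the package selection taken from the paragraph above):

# Install the full Netplan stack: interactive CLI, generator and systemd-resolved
d-i preseed/late_command string in-target apt-get -y install netplan.io netplan-generator systemd-resolved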

We’re using the linux kernel and initrd.gz here to be able to pass the preseed URL as a parameter to the kernel’s cmdline directly. Launching this VM should bring up the official debian-installer in its netboot/gtk form:

$ export U=https://people.ubuntu.com/~slyon/d-i/netplan-preseed+full.cfg
$ qemu-system-x86_64 \
	-M q35 -enable-kvm -cpu host -smp 4 -m 2G \
	-drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
	-drive if=pflash,format=raw,unit=1,file=OVMF_VARS_4M.fd,readonly=off \
	-device qemu-xhci -device usb-kbd -device usb-mouse \
	-vga none -device virtio-gpu-pci \
	-net nic,model=virtio -net user \
	-kernel ./linux -initrd ./initrd.gz -append "url=$U" \
	-hda ./data.qcow2 -cdrom ./mini.iso;

Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings, to see how those get translated to /etc/netplan/ in the target system.

After you have confirmed your partitioning changes, the base system gets installed. I suggest not selecting any additional components, like desktop environments, to speed up the process.

During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.

Done! After the installation finished, you can reboot into your virgin Debian Sid/Trixie system.

To do that, quit the current Qemu process by pressing Ctrl+C, and make sure to copy over the EFIVARS.fd file that was modified by grub during the installation, so Qemu can find the new system. Then reboot into the new system, not using the mini.iso image any more:

$ cp ./OVMF_VARS_4M.fd ./EFIVARS.fd
$ qemu-system-x86_64 \
        -M q35 -enable-kvm -cpu host -smp 4 -m 2G \
        -drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
        -drive if=pflash,format=raw,unit=1,file=EFIVARS.fd,readonly=off \
        -device qemu-xhci -device usb-kbd -device usb-mouse \
        -vga none -device virtio-gpu-pci \
        -net nic,model=virtio -net user \
        -drive file=./data.qcow2,if=none,format=qcow2,id=disk0 \
        -device virtio-blk-pci,drive=disk0,bootindex=1 \
        -serial mon:stdio

Finally, you can play around with your Netplan enabled Debian system! As you will find, /etc/network/interfaces exists but is empty; it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.

In our case, we also installed the Netplan CLI, so we can play around with some of its features, like netplan status.
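For illustration only (the real output depends on your installation), a quick look might involve commands like these; the file and interface names below are hypothetical:

$ netplan status
$ cat /etc/netplan/90-dhcp.yaml      # hypothetical file name written by d-i
network:
  version: 2
  ethernets:
    enp1s0:                          # hypothetical interface name
      dhcp4: true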

Thank you for following along the Netplan enabled Debian installation process and happy hacking! If you want to learn more, find us at GitHub:netplan.

on July 30, 2024 04:24 AM

July 14, 2024

uCareSystem has had the ability to detect if a system reboot is needed after applying maintenance tasks for some time now. With the new release, it will also show you the list of packages that requested the reboot. Additionally, the new release has squashed some annoying bugs. Restart ? Why though ? uCareSystem has had […]
on July 14, 2024 05:03 PM

July 04, 2024

 

Critical OpenSSH Vulnerability (CVE-2024-6387): Please Update Your Linux

A critical security flaw (CVE-2024-6387) has been identified in OpenSSH, a program widely used for secure remote connections. This vulnerability could allow attackers to completely compromise affected systems (remote code execution).

Who is Affected?

Only specific versions of OpenSSH (8.5p1 to 9.7p1) running on glibc-based Linux systems are vulnerable. Newer versions are not affected.

What to Do?

  1. Update OpenSSH: Check your version by running ssh -V in your terminal. If you're using a vulnerable version (8.5p1 to 9.7p1), update immediately (example commands follow this list).

  2. Temporary Workaround (Use with Caution): Disabling the login grace timeout (setting LoginGraceTime=0 in sshd_config) can mitigate the risk, but be aware it increases susceptibility to denial-of-service attacks.

  3. Recommended Security Enhancement: Install fail2ban to prevent brute-force attacks. This tool automatically bans IPs with too many failed login attempts.
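Referring back to steps 1 and 2, here is a sketch of what checking and updating might look like using standard distribution packages (package and service names are the usual defaults; adjust them for your setup):

# Check the installed OpenSSH version
ssh -V

# Debian/Ubuntu: upgrade to the patched packages and restart the daemon
sudo apt update && sudo apt install --only-upgrade openssh-server openssh-client
sudo systemctl restart ssh        # the service is called sshd on RHEL-family systems

# AlmaLinux/Rocky Linux
sudo dnf upgrade openssh-server && sudo systemctl restart sshd

# Temporary mitigation only (see the caveat in step 2), on systems whose sshd_config includes sshd_config.d
echo 'LoginGraceTime 0' | sudo tee /etc/ssh/sshd_config.d/99-cve-2024-6387.conf
sudo systemctl restart ssh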

Optional: IP Whitelisting for Increased Security

Once you have fail2ban installed, consider allowing only specific IP addresses to access your server via SSH. This can be achieved using:

  • ufw for Ubuntu

  • firewalld for AlmaLinux or Rocky Linux
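For example, to accept SSH only from a single trusted address (203.0.113.10 is a placeholder; substitute your own management IP):

# ufw (Ubuntu): allow the trusted address first, then deny SSH in general
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
sudo ufw deny 22/tcp

# firewalld (AlmaLinux/Rocky Linux): add a rich rule for the trusted address and drop the blanket ssh service
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.10" port port="22" protocol="tcp" accept'
sudo firewall-cmd --permanent --remove-service=ssh
sudo firewall-cmd --reload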

Additional Resources

About Fail2ban

Fail2ban monitors log files like /var/log/auth.log and bans IPs with excessive failed login attempts. It updates firewall rules to block connections from these IPs for a set duration. Fail2ban is pre-configured to work with common log files and can be easily customized for other logs and errors.

Installation Instructions:

  • Ubuntu: sudo apt install fail2ban

  • AlmaLinux/Rocky Linux: sudo dnf install fail2ban
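After installation, the SSH jail can be tuned with a small override file; a minimal sketch (the option names come from fail2ban's stock jail.conf, and the values here are only examples):

# Create a minimal /etc/fail2ban/jail.local and enable the service
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
sudo systemctl enable --now fail2ban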


About DevSec Hardening Framework

The DevSec Hardening Framework is a set of tools and resources that helps automate the process of securing your server infrastructure. It addresses the challenges of manually hardening servers, which can be complex, error-prone, and time-consuming, especially when managing a large number of servers. The framework integrates with popular infrastructure automation tools like Ansible, Chef, and Puppet. It provides pre-configured modules that automatically apply secure settings to your operating systems and services such as OpenSSH, Apache and MySQL. This eliminates the need for manual configuration and reduces the risk of errors.
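As an illustration, with Ansible the framework's roles are published as a Galaxy collection; installing and applying the SSH hardening role might look roughly like this (collection and role names as published by the dev-sec project; check their documentation for current details):

# Install the hardening collection and apply its SSH role to your hosts
ansible-galaxy collection install devsec.hardening
cat > harden-ssh.yml <<'EOF'
- hosts: all
  become: true
  roles:
    - devsec.hardening.ssh_hardening
EOF
ansible-playbook -i inventory.ini harden-ssh.yml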


Prepared by LinuxMalaysia with the help of Google Gemini


5 July 2024

 

In Google Doc Format 

 

https://docs.google.com/document/d/e/2PACX-1vTSU27PLnDXWKjRJfIcjwh9B0jlSN-tnaO4_eZ_0V5C2oYOPLLblnj3jQOzCKqCwbnqGmpTIE10ZiQo/pub 



on July 04, 2024 09:42 PM