Dear DNF users, it is an honor for me to introduce to you our newly implemented `mark` command. If you are wondering what the motivation behind this feature was, and also expecting some tidbits from DNF development, you are in the right spot. So prepare your favorite hot beverage and stay tuned.

The story begins in the early days of DNF development, when the original members of the team decided that the cool feature called `clean_requirements_on_remove` should be enabled by default. This is the feature of DNF that prevents your system from bloating with installed but no longer needed dependencies of packages. Unfortunately, the world of RPM distribution is not always as bright and shiny as it might seem at first glance. There are situations where manual user intervention is necessary, so let's take a look at a few of them:

- PROBLEM: I used a package manager incompatible with DNF to install packages A and B. Consequently, DNF wants to autoremove these packages. SOLUTION: Packages that the user wants to keep on the system can be marked as user-installed via `dnf mark install A B`.
- PROBLEM: I installed package A along with its dependencies B and C. Now I want to remove package A but keep package C. SOLUTION: Mark package C as user-installed with `dnf mark install C`.
- PROBLEM: I installed package A and subsequently installed package B, which depends on package A. Now I want to remove package A, but that is not possible without also removing package B. SOLUTION: Mark package A for removal with `dnf mark remove A`; package A will be autoremoved once no installed packages depend on it.

Thank you for your attention.
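The scenarios above boil down to two subcommands. A minimal session might look like this (A, B, and C are placeholder package names; the commands need root on a system with DNF):

```shell
# Mark packages as user-installed so autoremove will never touch them
dnf mark install A B

# Keep a single dependency (C) while removing the package that pulled it in (A)
dnf mark install C
dnf remove A

# Mark a package for removal; it is swept away once nothing depends on it
dnf mark remove A
dnf autoremove
```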
The new release of DNF and DNF-PLUGINS-CORE is coming to Fedora stable repositories. The `--downloadonly` option supported in yum is now available in DNF, and repoquery from DNF-PLUGINS-CORE has extended its reverse RPM tag queries (`--what*`) to accept glob patterns. Aside from that, nearly 20 bug fixes have been made in this DNF stack release. For further details, look at the DNF and DNF plugins release notes.
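To illustrate the two additions (the package pattern is just an example):

```shell
# Download the RPMs of a transaction without installing anything
dnf upgrade --downloadonly

# Reverse RPM tag queries now accept glob patterns,
# e.g. list packages requiring anything that matches libxml*
dnf repoquery --whatrequires 'libxml*'
```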
tl;dr: we have started to use an internal Jenkins instance in combination with a public Copr - please update the URLs of the DNF nightly builds repos and do the same with your project ;-D

I thought that it might be a good idea to incorporate all the ideas I had on my TODO list into our continuous integration process before I leave the DNF team. I would say that this effort was quite successful and that the process has improved a lot. I believe that it might be interesting (or even inspirational) for you to know how it works.

Originally, we started with a Jenkins job hosted by Fedora infrastructure which built RPMs on every new commit pushed upstream. Later, it turned out that it would be really nice if it tested submitted pull requests as well. There is a Jenkins plugin for almost anything you can come up with, and that goes for GitHub pull requests too. Unfortunately, you have to ask Fedora infrastructure to install any additional plugin you need. And what is worse, the plugin is designed so that the GitHub credentials must be configured globally, and we didn't want to give access to our repository to anyone who uses the same Jenkins instance. Since we were too impatient to wait and see whether the plugin could be changed, Michal managed to install Jenkins on our internal OpenStack instance.

This change allowed us to structure (and hence speed up) the continuous integration process a bit. I mean, we do not build just RPMs of DNF. We also build hawkey and the core DNF plugins, and even two more dependencies - librepo and libcomps. Among other things, this allows us to develop against the most recent versions of these libraries, and it also provides us with an additional assurance that a new version of these libraries will not break DNF. You can imagine that building RPMs of five projects for two architectures and for multiple versions of Fedora (sequentially) can take some time.
With our own Jenkins instance, we don't need to be ashamed to create 5 different Jenkins jobs, each split into two sub-jobs, one per architecture. Moreover, one job can be configured as an upstream of another job, so that e.g. if a new build of hawkey succeeds, Jenkins may trigger a build of DNF to test whether the new version of hawkey did not break DNF. A nice side effect of this split is also that e.g. a new change in DNF does not trigger a new build of hawkey - that means fewer builds at the same time, hence faster builds.

In the meantime, something happened with the hosted Jenkins: all DNF builds started to fail there. In the job configuration, there is no "Multiple SCMs" option any more. I suspect that the Multiple SCMs plugin broke after an update, or that it was uninstalled. This proved to be another advantage of having our own Jenkins instance, along with the fact that Fedora infrastructure does not guarantee the availability of their Jenkins instance.

This issue caused another problem. Originally, with the launch of the continuous integration process, we also promised users nightly builds of DNF. With a failing public Jenkins and a succeeding but private Jenkins, users lost access to the nightly builds. Luckily, the people around Copr have recently added the possibility to upload SRPMs to Copr, and Copr is most likely the best place where to host RPMs of project snapshots. So, we decided to use Copr to build RPMs on every Git change (and on pull requests as well). This again sped up the builds, allowed us to build for more architectures, and we also got the best public hosting for our nightly builds. To sum it up, our continuous integration has transformed a lot.
Our internal Jenkins instance currently watches our GitHub repository (including the pull requests). On every change, it builds the SRPM of the appropriate component (hawkey, DNF, core DNF plugins, librepo, libcomps) using tito (in most cases), then it uploads the source RPM to Copr, which builds the binary RPMs (using the RPMs from the previous builds), reports the result through emails and/or the GitHub API, and potentially triggers builds of the other dependent projects. If I have gained your attention, I would be honoured if you took a look at the CI script here: https://github.com/rpm-software-management/ci-dnf-stack. You can find some instructions for setting up your own Jenkins job there as well. Please note that the ability of tito to upload an SRPM to Copr is coming soon; then you probably won't need this script any more. Since the new approach works very well, we are going to disable the job at http://jenkins.cloud.fedoraproject.org. If you are still interested in using nightly builds of DNF (at your own risk!), please enable the Copr rpmsoftwaremanagement/dnf-nightly, e.g. using:
dnf copr enable rpmsoftwaremanagement/dnf-nightly
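For the curious, the per-commit pipeline described above can be sketched roughly as follows. This is a simplified illustration, not the actual CI script: the output directory is made up, and the exact tito and copr-cli invocations in ci-dnf-stack may differ.

```shell
# Build a source RPM from the current git state with tito
tito build --srpm --output /tmp/srpms

# Hand the SRPM to Copr, which builds binary RPMs for all configured chroots
copr-cli build rpmsoftwaremanagement/dnf-nightly /tmp/srpms/dnf-*.src.rpm
```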
Good news, everyone! After another 3 weeks, new versions of DNF and DNF-PLUGINS-CORE have been released. DNF 1.1.1 brings the `mark` command feature, while DNF-PLUGINS-CORE adds 4 new filters known from the `dnf list` command and extends the functionality of the `--arch` and `--tree` switches. Additionally, around 15 bugs have been fixed. For more detailed information about the releases, see the DNF and DNF plugins release notes.
Another crucial release of DNF is out, with a lot of new features and over 20 bug fixes. A basic control mechanism for weak dependencies has been added. Now you are able to query all weak dependencies, in both forward and backward directions, in repoquery, and to allow or disallow installing weak dependencies through the `install_weak_deps` DNF configuration option. Moreover, with the whole DNF stack you will be able to take advantage of rich dependencies in F23, along with newly added MIPS architecture support. Although the DNF team encourages users to "do the things in the right (DNF) way", we still listen to the community and implement the most requested yum features that DNF is missing. I am talking about `--skip-broken` in the install command. Installing the largest possible subset of the given packages, without raising an error when all dependencies cannot be satisfied, can be achieved by setting the `strict` DNF configuration option. More information can be found in the DNF and DNF plugins release notes. Enjoy this release and look forward to the next version.
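For illustration, the two configuration options go into `/etc/dnf/dnf.conf`, and the weak-dependency queries are plain repoquery switches. The values and package names below are examples, not new defaults:

```shell
# Example /etc/dnf/dnf.conf tweaks (illustrative values):
#   [main]
#   install_weak_deps=False   # do not pull in weak dependencies on install
#   strict=0                  # install the largest satisfiable subset instead of failing

# Weak dependencies can be queried in both directions, e.g.:
dnf repoquery --recommends vim-enhanced        # forward: what vim-enhanced recommends
dnf repoquery --whatrecommends bash-completion # backward: what recommends bash-completion
```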