Opened 14 years ago

Closed 13 years ago

#181 closed task (fixed)

Create RPM installer

Reported by: Dmitry A. Kuminov Owned by:
Priority: major Milestone: Qt 4.7
Component: General Version: 4.6.2
Severity: low Keywords:
Cc:

Description

There is a decision to distribute Qt only in RPM format. Two reasons:

  1. WarpIn and RPM compete and should not co-exist, as this creates a mess (file duplication, version conflicts, and so on) on the user's machine.
  2. Going RPM-only will motivate users to switch to RPM ASAP, as Qt is usually a highly wanted piece of software.

Change History (19)

comment:1 Changed 14 years ago by Dmitry A. Kuminov

Type: defect → task

More details about RPM are here: http://svn.netlabs.org/rpm. Yuri released a WarpIn installer for bootstrapping RPM a couple of days ago. I will test it today.

comment:2 Changed 14 years ago by Dmitry A. Kuminov

To make it clear why we want to favor RPM and drop WarpIn:

Rudi wrote in #172:

About RPM: I'm not so sure that average users will see it as an enhancement. There is no doubt that it is much more powerful than WarpIn, but users see a relatively small and fancy GUI installer being replaced by a command line monster that only experts can handle. Many will consider that a step backwards as long as we don't provide a graphical interface as simple and "sexy" as WarpIn.

No doubt, WarpIn has a nice GUI (especially compared to those ugly installers IBM has always had), but it simply can't do the job well anymore. It's too weak feature-wise and contains a number of bug-o-features that prevent it from being used for software as complex as Qt. Its main weaknesses are these:

  1. No package upgrade feature. Typically (read: almost always), a new version of a package Foo cannot co-exist with the previous version of the same package due to DLL conflicts and other resource conflicts. With WarpIn, this situation can only be resolved manually: first delete the previous version, then install the new one. Users frequently don't bother or simply forget, which leads to broken installations all over the place that are bug-prone and hard to fix. RPM solves this nicely by automatically uninstalling the old version before installing the new one, while keeping the configuration files intact. Moreover, in RPM you can express things like "the new version of package Foo requires package Bar to be replaced with package BetterBar". This is simply impossible in WarpIn. And so on.
  2. A weak package dependency mechanism. WarpIn packages encapsulate all dependency information inside themselves, so this information is not available until a package has been downloaded to the user's machine and an installation attempted. As a result, it's impossible to build the full dependency tree for a given package until all its dependencies have been downloaded and installed. In other words, if all you have is the package you want to install, you will never know in advance what else it needs (unless you find a README.TXT somewhere in the Universe that lists all its dependencies) and therefore you will never be able to install it. Thanks to its centralized package database, RPM handles this very well. Moreover, with the help of front-ends like yum, it can automatically download the correct versions of missing packages from software repositories on the Internet, which is simply a must-have feature nowadays, as it saves the user a whole lot of time and effort.

comment:3 Changed 14 years ago by Dmitry A. Kuminov

Next, saying that RPM is a "command line monster that only experts can handle" is simply not true. It is very easy to use once you learn its few basic command line options. And thanks to smart command line front-ends like yum, it's even easier. All the user has to do to install the latest version of a software package named 'my-cool-program' is run:

yum install my-cool-program

This command will download the package itself and all its dependencies, then automatically install and configure them all in a single step. It couldn't be simpler.

Uninstalling is simple as well:

yum remove my-cool-program

AFAIR, it can even be told to also remove all the dependencies of this package installed for it during the install phase.

Also, there is another command line front-end to RPM, called apt-rpm. It is the famous apt tool from Debian-based Linux distributions (such as Ubuntu), modified to support RPM instead of DPKG. It provides the same very simple interface as yum.

And of course there are a number of GUI RPM front-ends. We will surely port one later. Given how simple it is to install RPM packages from the command line, this is not a first-priority task at present.

comment:4 Changed 14 years ago by Dmitry A. Kuminov

Just for the record: I searched for RPM GUIs a bit and discovered an interesting thing. The modern tendency to generalize and unify things has reached the world of software installer tools as well. I found the PackageKit project, which claims to be a universal software installer service supporting most modern tools like yum and apt. It has a lot of GUI front-ends, which we should check out at some point (to see which ones we can build on OS/2).

comment:5 Changed 14 years ago by Dmitry A. Kuminov

The Qt RPMs in fact require some other packages. These are the requirements for the binary Qt packages:

  • libc (already have).
  • cups (optional, for printing support).
  • openssl (optional, for SSL support, already have).
  • mysql (optional, for MySQL support).
  • postgres (optional, for Postgres support).

Cups and openssl are a must for us (as they provide essential functionality), so we have to package cups at least.

The thing that is still unclear to me is what to do with source packages (SRPMs) for Qt. As far as I understand (still reading the RPM Guide), you can't have RPMs without SRPMs. For source packages, we need the following (all mandatory):

  • gcc (already have)
  • wlink (patched)
  • lxlite
  • libc-devel (already have)
  • cups-devel
  • openssl-devel (already have)
  • mysql-devel
  • postgres-devel

So it turns out that we must provide at least development RPMs for cups, mysql and postgres if we want these options enabled in the binary Qt RPMs we provide. We also must provide RPMs for lxlite and wlink, but it's even more unclear what to do with SRPMs for them, as we don't have the wlink sources and lxlite probably needs VisualAge C rather than GCC.

comment:6 Changed 14 years ago by Dmitry A. Kuminov

Given the amount of work necessary to do all these packages (plus the work we need to do for the RPM bootstrap package, see my mail), it sounds unrealistic to release Qt tomorrow (the current deadline is 22 Oct). I guess we need an extra week at least.

comment:7 Changed 14 years ago by ydario

I think there is some misunderstanding here.

A source package (SRPM) is only required to rebuild a package, not to build it from scratch. In both cases the build-requires clause must be satisfied, but it is not mandatory to have an RPM for the required libraries: e.g. if you need mysql.lib and it is in the /usr/lib directory, gcc will be happy and rpmbuild will not complain about it (in this case a "BuildRequires: mysql-devel" must not be present in the spec file).
rpmbuild simply drives the build system using the rules in the .spec file; no rules, no checks.
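To illustrate, here is a hypothetical minimal skeleton of a .spec file (the package name, version and paths are made up for this sketch); rpmbuild just runs the %build and %install sections and packages whatever the %files list names:

```spec
# Hypothetical example only -- names, versions and paths are made up.
Name:           my-cool-program
Version:        1.0
Release:        1
Summary:        Example package
License:        LGPLv2

# Only declare BuildRequires when the dependency is itself an RPM;
# a plain mysql.lib sitting in /usr/lib needs no such line.
#BuildRequires: mysql-devel

%description
Example package illustrating the structure of a spec file.

%build
# drive the project's own build system here (configure, make, ...)

%install
# copy the build results into %{buildroot}

%files
/usr/bin/my-cool-program
```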

mysql/postgres could be a bigger problem for installation: without mysql/postgres packages, you cannot mark them as required dependencies of the Qt modules, so the user must be able to install those modules without dependency checking. This means a "Requires: mysql, postgres" cannot be added to the binary RPM. Users will be allowed to install Qt's mysql support regardless of an actual mysql installation.

I must say that building an RPM package to ship the mysql dll (only) is straightforward; I already do this for ash/bash.

comment:8 Changed 13 years ago by Dmitry A. Kuminov

In order to better fit into the Unix-like directory structure, I will change the way the system-wide Qt library configuration file is searched.

First, I will add -prefix, -bindir, etc. options to configure.cmd (as on Linux) that allow setting what QLibraryInfo returns as library paths. With these options, the RPM build script will set the hard-coded paths to $(UNIXROOT)/usr/lib/qt4, etc.

Next, in order to support Qt4 package relocation (installation to a different directory tree), qt.conf will still be created by the post-installation script, with contents pointing to the installed location (note that this file is called qtsys.conf in Qt 4.6.2, and that name will remain supported for backward compatibility). This file will be searched for in the following locations, in order:

  1. %UNIXROOT%\etc\qt.conf (the value for "etc" is actually taken at RPM build time from the sysconfdir variable, which is normally just that).
  2. %ETC%\qt.conf (for backward compatibility).
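For illustration, such a qt.conf might look like this (the [Paths] section is the format Qt's QLibraryInfo reads; the drive and directories below are made-up examples, not the values the RPM build will actually write):

```ini
; hypothetical example locations -- the real values are set at install time
[Paths]
Prefix = C:/usr/lib/qt4
Headers = C:/usr/include/qt4
Documentation = C:/usr/share/doc/qt4
```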

comment:9 Changed 13 years ago by Dmitry A. Kuminov

QLibraryInfo is fixed in r813. configure.cmd is enhanced in r815.

comment:10 Changed 13 years ago by Silvan Scherrer

Milestone: Qt 4.6.3 → Qt 4.7

comment:11 Changed 13 years ago by Dmitry A. Kuminov

We're postponing RPM till 4.7 because it is not yet stable enough.

We may also re-release 4.6.3 as RPM if RPM becomes stable long before 4.7 is ready.

comment:12 Changed 13 years ago by Dmitry A. Kuminov

A few words about probably the most frequent complaint about RPM from OS/2 users -- hard-coded installation paths that normally cannot be changed. At first sight, this looks really annoying to a person who is used to freely selecting which drive and directory software should live in on his system, which was the typical way in DOS and OS/2 and still is in Windows. In this approach, users usually dedicate volumes or even whole hard disks to specific tasks: this is my boot drive, this is a drive for applications and data I use, this one is for games and that one is for archives. There are almost no restrictions on the way this separation is done, so many people do it according to their own needs and experience. This formed a practice where the software developer had to take possible differences in storage layout into account and make his application flexible enough to live on different drives and in different directories.

In Unix, where RPM came from, there is no concept of drive letters at all. Instead, there is a single file system into which all storage (among other non-storage things, like processes) is mapped. Due to this, a more-or-less common directory structure for this single file system was formed so that people don't have to learn each new system from scratch (nowadays there is even a standard for it, called the FHS). As a result, many applications use hard-coded paths to refer to their data and other system resources.

While the first approach looks more flexible, it works great only when the user knows the OS and all the software he uses (and their storage requirements) very well, so that he can create an optimal drive setup for his system. This is, however, not true nowadays, when the amount of software a regular user needs, as well as its complexity, has increased dramatically, while the average level of knowledge of this user has dropped. The user wants it to "just work" and doesn't want to care much about where to install things. Given today's complexity, offering him the freedom to choose gives the developers an endless number of combinations to care about, which in turn increases development costs (and the likelihood of bugs in the final product). From this point of view, having strict installation paths not controllable by the user is an improvement: the developer has only one structure to support, and the user doesn't need to spend time and brain cells on housekeeping (there are usually much better things to do).

However, the user is not a monkey, of course, and there are still things that he wants to be able to decide on. If we focus on that, we will find that the user basically has a couple of simple requirements when it comes to storage (or even PC maintenance in general):

  1. Reserve enough space for his current needs when setting up a brand new PC.
  2. Have an easy way to recover after a system crash which includes getting the same environment (applications and settings) as he had before plus keeping all user data he created with the software he uses.


The strict FHS layout fits these requirements quite well. And thanks to the path rewrite feature in kLIBC, we may map it nicely to OS/2. Here is a typical layout of a planned future OS/2 system installed completely with RPM:

Drive C: → / (root). The boot drive, where all stock software such as OpenOffice, Firefox, all DLLs and other system stuff is installed. The size of this drive is fairly fixed and doesn't change much over the years. For example (from my experience), for Ubuntu 5G is a minimum for a user system and 8G is a minimum for a developer system.
Drive D: → /home. A normal drive where all user settings and documents are stored (in a multi-user environment, the structure will be /home/user1, /home/user2). The more data the user plans to have, the bigger it should be. Later, he may add more drives and map them to /home/user1/volume2, volume3, etc.
Drive E: → /mnt/share. A shared drive accessible by all users and used for common stuff, archives, etc.
Drive F: → /usr/local. A zone for custom software installations (software not coming from stock repositories). Needed mostly by developers.
Drive G: → /var. A zone for system-wide software data, like global data caches, system-wide application settings and so on. Needed mostly on servers.
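As a toy illustration only (the real mapping is done by the kLIBC path rewrite feature, not by a script), the table above boils down to a simple drive-to-path lookup:

```shell
# Toy sketch of the drive-to-FHS mapping from the table above.
# This only mirrors the table for clarity; kLIBC does the real rewriting.
map_drive() {
  case "$1" in
    C) echo "/" ;;           # boot drive: OS and stock software
    D) echo "/home" ;;       # user settings and documents
    E) echo "/mnt/share" ;;  # shared data for all users
    F) echo "/usr/local" ;;  # custom software installations
    G) echo "/var" ;;        # system-wide application data
    *) echo "unmapped" ;;
  esac
}
map_drive D   # prints /home
```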

The first two mappings are a must; all others (and any additional ones) are optional and depend on the user's tasks. Given this layout, a system crash may be easily recovered from by simply reinstalling the OS from scratch (e.g. from DVD) to Drive C: and then installing all the applications the user needs. When using RPM, both tasks are very simple: the first requires nothing more than setting up your name and a time zone; the second may be done with a single command (all the software will be downloaded from the Internet and installed completely automatically, with no user interaction at all). All data remains on Drive D: (which is periodically backed up according to the user's needs) and gets immediately picked up by the freshly installed software.

An OS/2 user may ask: why should he install the software on the same drive where the bootable OS is? OS/2 is really small (500MB is usually enough for the OS itself), so in case of a crash he could quickly restore the boot drive contents from a backup ZIP file and, if the software resided on a different drive, have it survive the crash, ready to use right away... The answer is that today's software has a complex setup procedure (much more complex than a copy operation), so reinstalling/restoring the OS will kill this setup anyway (because most bits of it go to the boot drive: look at config.sys statements, WPS objects, registry settings, system-wide text configuration files, system plugin DLLs and so on), and the software will need to be reinstalled too, despite being on a different drive -- just to refresh its integration with the newly installed OS.

Another OS/2 user may say that he is a developer and therefore needs to install a lot of development libraries that he doesn't want messing with his clean and nicely set up boot partition. The answer to this is that the development libraries coming from the stock repositories are not really intended for development -- they should only be used for building the final versions of software to be put into these stock repositories (in order to have proper DLL dependencies and such). Such "stock" development packages play by the same rules, including tight integration with the boot partition, and don't really mess with it in any way. The only thing they require is a bigger boot partition, but that's usually known (and therefore may be planned for) when the PC is initially set up. The libraries the developer actually uses for his day-to-day work are usually custom debug builds made directly from sources. In the above scheme, they go to /home/user1/Development and therefore don't interfere with the boot drive either.

I hope this information will help those OS/2 people who don't have any positive Unix/Linux experience and are not aware of how software is managed there. I believe that implementing the described scheme will be a big improvement for OS/2, and for old-school users it's just a matter of time to get used to it. We will continue discussing the migration scenarios, problems and their possible solutions.

PS. AFAIR, mapping / to an arbitrary location doesn't work in kLIBC at the moment. This needs to be discussed with Knut and fixed in some way.

comment:13 Changed 13 years ago by Dmitry A. Kuminov

One important addition to the mapping scheme above: by default, all directories reside on the boot partition. So when the table says "needed mostly on servers", it just means that if you do not dedicate a special drive to the /var mapping, it will live on Drive C:; it doesn't mean that /var is not needed on non-server machines :)

comment:14 Changed 13 years ago by losepete

Hi dmik

I read the drive layout argument with interest. While it should standardise software installation, I suspect you may run into some resistance when you try to implement it - it seems to me that a lot of eCS-OS/2 users want to boss their systems around and decide what goes where.

One of your arguments contradicts itself, I think. Have a read of the paragraph starting "An OS/2 user may ask: why should he install the software on the same drive where the bootable OS is?"

In that paragraph you discuss using a Backup to recover a failed system on the boot drive and later state that the user will have to reinstall apps anyway due to lines required in config.sys and other configuration files. Surely a good recent Backup will have most of those - the only app(s) to reinstall would be those installed *since* the last Backup.

comment:15 Changed 13 years ago by Dmitry A. Kuminov

losepete, please accept my apologies for the long silence. Thank you very much for your feedback; I saw your comment and took it into account, but unfortunately I didn't have an opportunity to answer in time (the whole RPM thing was postponed back then due to lack of time). I will answer you here in a while.

Now back to the task.

To refresh the RPM stuff in my head, I started with a somewhat simpler task: creating RPM packages for Odin. Here you may see what I have done so far: http://svn.netlabs.org/odin32/changeset/21678. It's basically a template that may be reused by all our other projects. I will turn it into the complete RPMs now.

comment:16 Changed 13 years ago by Dmitry A. Kuminov

I need to change the way default paths to Qt components are defined. Currently, if no explicit qt.conf file is present alongside QtCore4.dll, all paths are calculated relative to the location of QtCore4.dll (e.g. "./doc", "./include", etc.). This no longer makes sense; it would be better if they were relative to the parent directory instead, i.e. "../doc", "../include", etc.
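The relative lookup can be sketched in shell terms (the /opt/qt4 location below is a made-up example; the real resolution happens inside QLibraryInfo):

```shell
# Resolve component paths relative to the PARENT of the directory
# that contains QtCore4.dll (example path only).
core_dll=/opt/qt4/bin/QtCore4.dll
prefix=$(dirname "$(dirname "$core_dll")")   # -> /opt/qt4
echo "$prefix/include"   # prints /opt/qt4/include
echo "$prefix/doc"       # prints /opt/qt4/doc
```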

This will make it possible for the ZIP distribution to work without a qt.conf at all, which is just perfect for a portable distribution.

In the case of the RPM distribution, Qt is not stored in a single tree, so it will be necessary to specify some paths in qt.conf anyway.

comment:17 Changed 13 years ago by Dmitry A. Kuminov

Did the above in r1030 (with a build fix in r1032).

Now, all is simple.

  1. Future official release builds (RPM) will have component paths hard-coded into QtCore4.dll, according to the system locations where RPM installs them (as set by configure.cmd). No qt.conf is necessary.
  2. Hard-coded paths in development builds (produced by configure.cmd by default) will point everything to the parent of the directory containing QtCore4.dll -- this is where things reside right after the build (shadow or not). No qt.conf is necessary.
  3. Official portable ZIP distributions (made from the same binaries as the RPM) will provide a qt.conf (located next to QtCore4.dll) that overrides the hard-coded paths to point to the parent of the directory containing QtCore4.dll (where everything resides in the ZIP distribution).
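For item 3, the qt.conf shipped in the ZIP could be as small as this sketch (relative paths in qt.conf are resolved against the directory containing qt.conf itself, so ".." means the parent of the bin directory holding QtCore4.dll):

```ini
; qt.conf placed next to QtCore4.dll in the portable ZIP
[Paths]
Prefix = ..
```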

This way, qt.conf is only necessary for the portable ZIP (it was necessary in all three cases before).

This makes support for system-wide qt.conf files unnecessary so it was completely dropped.

Last edited 13 years ago by Dmitry A. Kuminov

comment:18 Changed 13 years ago by Dmitry A. Kuminov

The tree is frozen now, no new fixes for Qt itself. I'm rebuilding the RPMs.

comment:19 Changed 13 years ago by Dmitry A. Kuminov

Resolution: fixed
Status: new → closed

This was fixed 2 days ago :-)
