Spacewalk 1.7 - Not Generating Repo Metadata

For more information on Red Hat's Spacewalk, visit the project's website.


Managing Linux updates can be a nightmare if it is not centralized. Spacewalk, Red Hat's open source Satellite alternative, can solve that problem by allowing administrators to push updates to multiple servers from its sleek, easy-to-use interface. Since most of the boxes in my current work environment are CentOS 5 and CentOS 6 servers, we decided to embark on the endeavor of installing a Spacewalk server to centralize the update process for them.

After installing the server, I installed the Spacewalk client tools on one of our CentOS 5 servers. Basking in excitement, I pulled the trigger on the first simple test: a yum clean all followed by a yum update on the client, only to be greeted by the error below.
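For reference, the test was nothing fancier than the two standard yum commands, run as root on the client (the repository label in the error will of course match your own channel names):

yum clean all
yum update

The yum update is what came back with: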

Error: Cannot retrieve repository metadata (repomd.xml) for repository: centos_5_base_x86_64.
Please verify its path and try again.

And so the troubleshooting begins...



Finding the Source:

It took a little bit of digging to discover that the "Taskomatic" daemon is responsible for the repository metadata generation on the Spacewalk server. So I restarted the Taskomatic daemon and started tailing the /var/log/rhn/rhn_taskomatic_daemon.log file.
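Roughly the following, assuming the stock service name that Spacewalk's packages install on CentOS (adjust to your environment):

service taskomatic restart
tail -f /var/log/rhn/rhn_taskomatic_daemon.log

The tail promptly produced the following Java stack trace: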

STATUS | wrapper  | 2012/03/21 02:14:26 | TERM trapped.  Shutting down.
STATUS | wrapper  | 2012/03/21 02:14:28 | -- Wrapper Stopped
STATUS | wrapper  | 2012/03/21 02:14:29 | -- Wrapper Started as Daemon
STATUS | wrapper  | 2012/03/21 02:14:29 | Launching a JVM...
INFO   | jvm 1    | 2012/03/21 02:14:29 | Wrapper (Version 3.2.3)
INFO   | jvm 1    | 2012/03/21 02:14:29 |   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
INFO   | jvm 1    | 2012/03/21 02:14:29 |
INFO   | jvm 1    | 2012/03/21 02:14:33 | Mar 21, 2012 2:14:33 AM com.mchange.v2.log.MLog clinit
INFO   | jvm 1    | 2012/03/21 02:14:33 | INFO: MLog clients using java 1.4+ standard logging.
INFO   | jvm 1    | 2012/03/21 02:14:33 | Mar 21, 2012 2:14:33 AM com.mchange.v2.c3p0.C3P0Registry banner
INFO   | jvm 1    | 2012/03/21 02:14:33 | INFO: Initializing c3p0- [built 06-August-2008 15:35:00; debug? false; trace: 5]
INFO   | jvm 1    | 2012/03/21 02:14:33 | Mar 21, 2012 2:14:33 AM com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource getPoolManager
INFO   | jvm 1    | 2012/03/21 02:14:33 | INFO: Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource@9e45ab16 [ connectionPoolDataSource com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@2c44367a [ acquireIncrement 3, acquireRetryAttempts 30, acquireRetryDelay 1000, autoCommitOnClose false, automaticTestTable null, breakAfterAcquireFailure false, checkoutTimeout 0, connectionCustomizerClassName null, connectionTesterClassName com.mchange.v2.c3p0.impl.DefaultConnectionTester, debugUnreturnedConnectionStackTraces false, factoryClassLocation null, forceIgnoreUnresolvedTransactions false, identityToken 2tvxpj8meec813kwprd6|1230173b, idleConnectionTestPeriod 300, initialPoolSize 5, maxAdministrativeTaskTime 0, maxConnectionAge 0, maxIdleTime 300, maxIdleTimeExcessConnections 0, maxPoolSize 20, maxStatements 0, maxStatementsPerConnection 0, minPoolSize 5, nestedDataSource com.mchange.v2.c3p0.DriverManagerDataSource@492d7c91 [ description -null, driverClass null, factoryClassLocation null, identityToken 2tvxpj8meec813kwprd6|666f258a, jdbcUrl jdbc:postgresql://, properties {user=******, password=******, driver_proto=jdbc:postgresql} ], preferredTestQuery null, propertyCycle 0, testConnectionOnCheckin false, testConnectionOnCheckout true, unreturnedConnectionTimeout 0, usesTraditionalReflectiveProxies false; userOverrides: {} ], dataSourceName null, factoryClassLocation null, identityToken 2tvxpj8meec813kwprd6|18b77860, numHelperThreads 3 ]

FATAL  | jvm 1    | 2012/03/21 02:14:36 | Failure occured during job recovery.
com.redhat.rhn.taskomatic.core.TaskomaticException: Failure occured during job recovery.
at com.redhat.rhn.taskomatic.core.SchedulerKernel.startup(
at com.redhat.rhn.taskomatic.core.TaskomaticDaemon$

Caused by: org.quartz.SchedulerConfigException: Failure occured during job recovery. [See nested exception: org.quartz.JobPersistenceException: Couldn't retrieve trigger: 2 [See nested exception: java.lang.ArrayIndexOutOfBoundsException: 2]]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.schedulerStarted(
at org.quartz.core.QuartzScheduler.start(
at org.quartz.impl.StdScheduler.start(
at com.redhat.rhn.taskomatic.core.SchedulerKernel.startup(
... 2 more

Caused by: org.quartz.JobPersistenceException: Couldn't retrieve trigger: 2 [See nested exception: java.lang.ArrayIndexOutOfBoundsException: 2]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveTrigger(
at org.quartz.impl.jdbcjobstore.JobStoreSupport.recoverMisfiredJobs(
at org.quartz.impl.jdbcjobstore.JobStoreSupport.recoverJobs(
at org.quartz.impl.jdbcjobstore.JobStoreSupport$2.execute(
at org.quartz.impl.jdbcjobstore.JobStoreSupport$41.execute(
at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(
at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(
at org.quartz.impl.jdbcjobstore.JobStoreSupport.recoverJobs(
at org.quartz.impl.jdbcjobstore.JobStoreSupport.schedulerStarted(
... 5 more

Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
at org.postgresql.util.PGbytea.toBytes(
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getBytes(
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getBytes(
at org.apache.commons.dbcp.DelegatingResultSet.getBytes(
at org.quartz.impl.jdbcjobstore.PostgreSQLDelegate.getObjectFromBlob(
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.selectTrigger(
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveTrigger(
... 13 more

Postgres Connector:

A little google-fu on the last exception turned up a few hits referencing the PostgreSQL JDBC connector used by the Spacewalk server. I first checked to ensure that the connector was in the appropriate place:

ls -lah /usr/share/java/postgresql-jdbc*
/usr/share/java/postgresql-jdbc.jar  postgresql-jdbc-8.4.701.jar
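If you want to be explicit about which jar the generic name resolves to, a quick readlink will tell you (in this case it pointed at the 8.4 driver listed above):

readlink /usr/share/java/postgresql-jdbc.jar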

Looking at the results, I noticed that the PostgreSQL JDBC driver was the 8.4 version. Knowing that we were running PostgreSQL 9.1 on the DB server, I decided to update the JDBC driver to the matching version and point the symlink at the updated jar.
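First, the newer jar has to be on the box. At the time, the 9.1 JDBC4 build was published on jdbc.postgresql.org; something along these lines should fetch it, though the exact URL and version are assumptions here and should match whatever your database server is actually running:

wget -P /usr/share/java/ http://jdbc.postgresql.org/download/postgresql-9.1-901.jdbc4.jar

With the new jar in place, swapping the symlink is just: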


cd /usr/share/java/
rm postgresql-jdbc.jar
rm: remove symbolic link 'postgresql-jdbc.jar'? y

ln -s postgresql-9.1-901.jdbc4.jar postgresql-jdbc.jar
ls -lah /usr/share/java/postgresql-jdbc.jar
lrwxrwxrwx 1 root root 28 Mar 21 02:23 /usr/share/java/postgresql-jdbc.jar -> postgresql-9.1-901.jdbc4.jar

Restart and Verify:

After a quick restart of the satellite service (/usr/sbin/rhn-satellite restart), I again tailed the logs and watched the following scroll through the log file:

INFO   | jvm 1    | 2012/03/21 02:39:08 | 2012-03-21 02:39:07,977 [Thread-53] INFO  com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter - Repository metadata generation for 'centos_5_base_x86_64' finished in 406 seconds
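As an extra sanity check on the server side, the regenerated metadata should also show up on disk. On a stock Spacewalk install the generated repodata lands under /var/cache/rhn/repodata/ per channel label (path from memory, so verify against your own install):

ls -lah /var/cache/rhn/repodata/centos_5_base_x86_64/repomd.xml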

After all of the channels had regenerated, I tried another yum update on the client, and... drumroll please... yum was happy and working as expected.

Post Requisites: