Posts by Ant

Sending Attachments with the Javamail 1.4.x API

Make your emails interesting with attachments!

Not that your emails aren’t already interesting – if you have some kind of regular job running and you want to produce a results bound file sent to your recipients as an attachment, this code example can illustrate one way it can be done. It’s pretty much the same thing as sending a regular email except that it uses multipart attachments as the body content of the message:

package com.faceroller.mail;

import java.io.IOException;
import java.util.Properties;

import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Multipart;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;
import javax.naming.NamingException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class Mailer {
	private static final Log log = LogFactory.getLog(Mailer.class);

	public static void send(Email email)
			throws MessagingException, NamingException, IOException {

		/*
		 * prefer the jndi lookup in your container, but when debugging,
		 * setting the properties explicitly will do
		 */
		// InitialContext ictx = new InitialContext();
		// Session session = (Session) ictx.lookup("java:/Mail");

		Properties props = (Properties) System.getProperties().clone();
		props.put("mail.transport.protocol", "smtp");
		props.put("mail.smtp.host", host);
		props.put("mail.smtp.port", port);
		props.put("mail.debug", "true");

		/*
		 * create the session
		 */
		Session session = Session.getInstance(props, null);

		/*
		 * set the message basics
		 */
		MimeMessage message = new MimeMessage(session);
		message.setFrom(InternetAddress.parse(email.getFrom(), false)[0]);
		message.setRecipients(Message.RecipientType.TO,
				InternetAddress.parse(email.getTo(), false));

		/*
		 * multipart attachments here, part one is the message text,
		 * the others are the actual files. notice the explicit mime
		 * type declarations
		 */
		Multipart multiPart = new MimeMultipart();

		MimeBodyPart messageText = new MimeBodyPart();
		messageText.setContent(email.getBodyAsText(), "text/plain");
		multiPart.addBodyPart(messageText);

		MimeBodyPart report = new MimeBodyPart();
		report.setContent(email.getAttachmentAsText(), "text/xml");
		multiPart.addBodyPart(report);

		MimeBodyPart rarAttachment = new MimeBodyPart();
		FileDataSource rarFile = new FileDataSource("C:/my-file.rar");
		rarAttachment.setDataHandler(new DataHandler(rarFile));
		rarAttachment.setFileName(rarFile.getName());
		multiPart.addBodyPart(rarAttachment);

		/*
		 * set the message's content as the multipart obj
		 */
		message.setContent(multiPart);

		/*
		 * do the actual sending here. host, port, username and
		 * password are configured elsewhere
		 */
		Transport transport = session.getTransport("smtp");

		try {
			transport.connect(username, password);
			transport.sendMessage(message, message.getAllRecipients());

			log.warn("Email message sent");
		} finally {
			transport.close();
		}
	}
}
You’ll notice the first multipart’s content is a String with the mime type “text/plain” – this is the part that gets rendered as the message’s body. You can add as many parts as you want, each one defined as a separate attachment. If you want to attach a rar or zipped-up archive, you can use the activation libraries to include it as one of the parts. The MimeBodyPart will automatically detect and fill in the mime type for the file – the detection is provided by FileDataSource. In JBoss, if you’re using the container’s mail service, you can configure the mail server properties in the deploy/mail-service.xml file and then use the initial context to get a handle on that configured mail session.

Get the jars and supporting docs from the Javamail site here: http://java.sun.com/products/javamail

The Deploy vs Deployers directory, JBoss v5.x

Tae Bo for JBoss!

JBoss ships with a few configurations that are meant to provide examples of how JBoss can be configured for your environment. It’s recommended you take the “default” configuration (or “all” if you require clustering) and then slim it down by removing the various mbean components found in the “jboss/server/&lt;configured instance&gt;/deployers” and “jboss/server/&lt;configured instance&gt;/deploy” folders until only your minimum requirements are met. If you deploy JBoss with everything as it is, you’re going to end up wasting system resources on services that your application is never going to use. For example, if your application doesn’t make any use of ejb2, then there’s no reason to enable ejb2 implementation services or deployers in your instance. By removing these unnecessary pieces you’ll end up with a more optimized configuration, and you’ll get the most bang for the buck from your installation. You’ll want to become familiar with each of the items in the deploy and deployers folders so that you can remove the services that would otherwise eat your cpu cycles while not providing any benefit.

The deploy directory

This deploy folder houses deployable services or applications meant to be run by the JBoss instance. To squeeze absolutely the most out of your installation you’ll want to be picky when deciding which services you end up leaving in. It should be noted that even though the JBoss instance is capable of hot deployments, it’s generally a bad idea to use hot deploy in a production environment – it’s bound to cause weird problems over time, since items in memory might not be completely deallocated and may end up causing unexpected behavior in the long run. It’s also a good idea NOT to use HypersonicSQL as the DefaultDS provider for your application. You probably do NOT want to queue up a massive 300k message queue into HSQL via the DefaultDS – you’ll want something a little more enterprise level. Replace the DefaultDS provider with your own vendor’s.

Here is a brief explanation of the items found in those directories:

  • deploy/admin-console.war – a useful utility you can use to inspect JBoss and start/stop or administer your applications and/or services. Not required to run your application, but if you run into trouble it can be handy for debugging or performance tuning
  • deploy/http-invoker.sar – this service allows you to invoke ejbs and make jndi calls through the http protocol. Calls are made to a URL similar to: http://&lt;hostname&gt;:8080/invoker/EJBInvokerServlet. If you don’t use this service, you can take it out.
  • deploy/jbossweb.sar – this is the tomcat integration service. If your application doesn’t have a web interface, then you can remove this safely.
  • deploy/jbossws.sar – this is the web service implementation. If your application does not use web services, or you choose to implement your own, you can safely remove this.
  • deploy/jmx-console – this is an application similar to the admin-console.war where you can go and inspect the server’s individual mbeans, view the jndi namespace (bound services, ejbs, etc), view stack traces for active threads, and other useful things like that. This is not required to run your app in a production environment, but it can be extremely useful for debugging or performance tuning.
  • deploy/jmx-remoting.sar – this allows rmi access to jmx in jboss. Even if you remove this you’ll still have access to jmx if you haven’t removed jmx-console.war.
  • deploy/management/console-mgr.sar – this is supposed to provide a gui interface for managing the jboss application server. I haven’t used it much, so I can’t say if it’s any good or not. This can be safely removed if you don’t intend to administer the server via a web gui.
  • deploy/messaging – the set of files in this directory is used for jms. Out of the box, it’s wired to use the hsql db that jboss comes with. If your application uses jms, you’ll want to remove hsqldb-persistence-service.xml and use the proper service.xml file that goes with your vendor. The right file can be found here: jboss-5.1.0.GA/docs/examples/jms. So if you use postgres you would use postgresql-persistence-service.xml instead of hsqldb-persistence-service.xml. There are other places you would need to update to completely remove the hsql dependency, but this particular fileset speaks to the jms implementation
  • deploy/profileservice-secured.jar – documentation says it allows web access to the profile service. I think this section allows you to administer the individual profiles, but I’m not completely sure it saves the configurations to disk.
  • deploy/ROOT.war – this is the default context application that comes with JBoss. I usually remove this completely so that I’m free to use the root context for my own application, since there can be only one in use.
  • deploy/security – this is used for configuring security policies for the server
  • deploy/uuid-key-generator.sar – this service is a uuid generator; it’s used by things like entity primary key sequence generators in ejb3. If you change the DefaultDS name to something else, the META-INF/jboss-service.xml file has a reference you’ll need to change.
  • deploy/xnio-provider.jar – default configuration for some kind of remote jboss connector
  • deploy/cache-invalidation-service.xml – this is a service that allows you to invalidate the ejb cache through jms. This is disabled by default, so it can safely be removed without problems
  • deploy/ejb2-container-jboss-beans.xml, deploy/ejb2-timer-service.xml – these are used to support ejb 2.x in jboss. Remove if your app doesn’t use ejb 2.x
  • deploy/ejb3-connectors-jboss-beans.xml, deploy/ejb3-container-jboss-beans.xml, deploy/ejb3-interceptors-aop.xml, deploy/ejb3-timerservice-jboss-beans.xml – these descriptors directly support ejb3 functionality in jboss, remove if your application doesn’t use ejb3
  • deploy/hdscanner-jboss-beans.xml – hot deploy support for jboss. Hot deployments are usually a bad idea for production environments, so take this out and save yourself the trouble
  • deploy/hsqldb-ds.xml – the hsql ds file. Remove this and wire the DefaultDS jndi resource to your own production database
  • deploy/jboss-local-jdbc.rar – this is the JCA adaptor that implements the jboss datasource. Remove this if your application doesn’t use the DS files.
  • deploy/jboss-xa-jdbc.rar – this is the JCA adaptor that is required to support distributed transaction management (XA) datasource
  • deploy/jca-jboss-beans.xml – the jboss JCA spec implementation, this allows JCA adaptors like jboss-xa-jdbc.rar to function (enabling transaction aware jdbc calls)
  • deploy/jms-ra.rar – jms resource adaptor
  • deploy/jmx-invoker-service.xml – mbean service that configures remote access to jmx via rmi. You probably don’t need this unless your application requires remote jmx management
  • deploy/jsr88-service.xml – mbean descriptor for the jsr-88 remote deploy spec. This can be safely removed if your application does not use any jsr-88 spec deployments. More info on the spec can be found here.
  • deploy/legacy-invokers-service.xml – mbean descriptors for legacy remote jms invokers
  • deploy/mail-ra.rar – resource adapters for javamail. You can safely remove this if your app doesn’t make use of javamail.
  • deploy/mail-service.xml – mbean descriptor for the localized jboss javamail configuration. Use this file to configure the mail server settings for receiving, sending, etc
  • deploy/monitoring-service.xml – mbean descriptors for jmx based server monitoring services, like the jmx-console.
  • deploy/profileservice-jboss-beans.xml – mbean descriptor that supports the jboss profile service. You probably don’t want to get rid of this since this piece should help configure your instance’s bootstrap and port settings.
  • deploy/properties-service.xml – mbean descriptors for the jboss properties service, allowing system properties to be loaded externally or remotely. You should be able to remove this safely if you don’t use the properties service in your application and don’t mind the default system properties
  • deploy/quartz-ra.rar – the quartz resource adapter. Remove if your application doesn’t make any use of quartz
  • deploy/remoting-jboss-beans.xml – mbean descriptors that support the jboss remoting project. More on this can be found here. You can remove this if your application doesn’t make use of any jboss remoting code.
  • deploy/schedule-manager-service.xml – mbean descriptors for the Java5 scheduler services. This can be configured for older pooled jmx based timers. I think this might be required for ejb3 timer support.
  • deploy/scheduler-service.xml – additional mbean descriptors for Java5 timers. I’m not convinced ejb3 timers require this to work, but if you’re not using ejb3 timer or the scheduling service you can safely remove this.
  • deploy/sqlexception-service.xml – mbean descriptor for handling vendor specific sql exceptions.
  • deploy/transaction-jboss-beans.xml – mbean descriptors enabling JTA transactions. This is required for ejbs transaction management. More info on the JTA spec can be found here.
  • deploy/transaction-service.xml – mbean descriptors for a jmx service that handles the jboss UserTransaction implementation for client applications
  • deploy/vfs-jboss-beans.xml – mbean descriptors for virtual file caching, used by the server for deployments. Probably a good idea to leave this in.

The deployers directory

The items in the deployers directory are used to aid jboss in its deploy capability. The ear, ejb, and seam deployers are examples of the types of artifacts jboss can deploy, and each one of these types needs a deployer descriptor configuration to enable its deployment capability. Remove some of these, and jboss will not deploy artifacts of the corresponding type. Chances are it’s OK to leave most of these in. You’ll want to remove or keep the obvious ones and experiment with the ones you’re not sure about if you want to achieve the most bare-bones and streamlined JBoss configuration. Here’s a brief rundown of these deployers:

  • deployers/bsh.deployer – enables bean shell script deployments. Remove if your application does not use beanshell scripts
  • deployers/ejb3.deployer – enables ejb3 deployers, remove if your application doesn’t use any ejb3.
  • deployers/jboss-aop-jboss5.deployer – enables the jboss 5 base aspects.
  • deployers/jboss-jca.deployer – enables JCA deployments. Keep this if your app makes use of JCA adapters like jboss-local-jdbc.rar
  • deployers/jbossweb.deployer – deploys web components like servlets and jsps
  • deployers/jbossws.deployer – deploys web service related endpoint components
  • deployers/seam.deployer – provides integration support for jboss seam
  • deployers/alias-deployers-jboss-beans.xml – supports alias deployment descriptors that install once the original is deployed
  • deployers/clustering-deployer-jboss-beans.xml – supports deployments for jboss clusters, you don’t need this if you’re not running in a clustered environment
  • deployers/dependency-deployers-jboss-beans.xml – adds support for dependency deployments like aliases
  • deployers/directory-deployer-jboss-beans.xml – legacy support for embedded deployable artifacts in folder deployments like embedded lib directories
  • deployers/ear-deployer-jboss-beans.xml – adds support for ear deployments
  • deployers/ejb-deployer-jboss-beans.xml – supports java 1.4 ejb deployments
  • deployers/hibernate-deployer-jboss-beans.xml – adds support for hibernate deployment descriptors and artifacts
  • deployers/jsr77-deployers-jboss-beans.xml – support for the JSR 77 spec, standard j2ee deployments. See more here
  • deployers/metadata-deployer-jboss-beans.xml – add support for reading in and deploying metadata in the form of annotations or xml metadata
  • deployers/security-deployer-jboss-beans.xml – supports deployment of security related configuration

Feeling 10 lbs lighter!

As you can see, JBoss is highly configurable and therefore extremely flexible – try to leave in only what you need to get the most out of your install. Sure, you can just use the default install as it is out of the box, but you’ll be bloating your server with unnecessary services as well as possibly opening security holes that a savvy intruder might be able to exploit. Specifically, the admin-console and jmx-console store default usernames and passwords in a properties file – if you leave those alone and don’t update or change them, you’ll be vulnerable to anyone who happens to be familiar with these defaults and how to access either console.

Jboss documentation on the “Default” configuration
JBoss 5.x Tuning/Slimming

Jars and Class Loading, Jboss v5.x

So where do I put all my jars?

As you write your applications you’re bound to leverage third party libraries to cut down on the amount of work; let’s face it, no one wants to reinvent the wheel. A downside is that these third party libraries might not always be the most mature or stable releases to date. As your product grows and matures, or you expand your client base or number of implementations, you’re bound to come across multiple third party library dependencies – even different versions of the same library. What a headache! How can we organize these libraries in JBoss? Luckily we are provided with a few directories where you can stick library jars for use in your own application. Here is a quick rundown:

  • jboss/client – this folder contains all the jar files used by any client application for jboss. For example, a swing application that needs to communicate with a remote JBoss instance would need the jar files in this directory on its classpath or it will not be able to communicate correctly. Generally, you don’t stick third party libraries used by your application here unless you’re writing some kind of jboss client application or you’re extending the server’s functionality in some way.
  • jboss/common/lib – this folder is meant to hold jar files that should be made available to all JBoss instances. Jar files global to all applications go here.
  • jboss/lib – this folder holds all the jar files used in order for the server to run correctly. You don’t stick your libraries in this directory unless you’re extending or adding functionality to the JBoss server itself.
  • jboss/lib/endorsed – this folder holds the endorsed libs jboss uses, which can be overridden by other implementations. Xalan implementations other than the default go here if you want to override with a newer version. Since JBoss itself relies on these libraries, be mindful that you might run into xml parsing issues if you use an older xalan library (jboss uses many xml files for configuration)
  • jboss/server/<configured instance>/lib – this is where you put any instance specific jar files

So for the most part, unless you’re going to be tinkering with extending or modifying the JBoss server itself, you’ll want to stick to one of three locations: the server global lib, the instance global lib, or the lib in your deployable artifact.

Possible scenarios

Ultimately, you’re going to need to make a decision on how you’ll manage your third party libraries, and it’s all based on your particular setup and installed application base. The JBoss class loaders can do teasingly mysterious things, as the order of precedence might not be completely obvious. The easiest pitfall to hit is having the same library loaded more than once, but at different versions. Which one gets loaded if there is more than one? The answer depends on the strategy you used in your setup, and figuring out the best strategy for your particular application(s) is paramount to minimizing this risk. First, let’s look at the viable options:

If you want to make your application portable and completely self contained – you’ll want to package all your third party libraries in the right lib directory for your war or ear file. The benefits include more complete portability by becoming completely self contained deployable artifacts, and therefore minimizing immediate class loading problems. The downside to packaging everything into your deployable artifact is that your instance startup times inflate. A full complement of third party libraries in a huge ear file containing multiple war files could end up taking minutes to deploy because each artifact deploys its own libraries; and if there are common libraries throughout, each one can be loaded separately if they’re not organized to minimize this inefficiency.

The converse is removing all the third party libraries and sticking them into the instance library (jboss/server/&lt;configured instance&gt;/lib). The instance libraries load up orders of magnitude more quickly than the prepackaged third party library strategy, but the downside is your deployable artifact is no longer completely self contained. This might not necessarily be a bad thing at the end of the day however, as long as your third party libraries can be easily managed in whatever application server you use. JBoss allows you to use an instance specific library folder, and it turns out to be a neat alternative to self contained libraries, especially when there is more than one deployed application and they share a few common libraries.

If you want to go a step more global than an instance specific common library, you could use the jboss/common/lib directory. This location will be loaded before the instance specific library and provides a baseline for all available instances. Any libraries that are super global should be placed here.

What the.. ?

So what happens if you have more than one library, say one in your war file, and another one in the instance lib, which one gets used by your code? It turns out that the order in which the classes are loaded matters and is the determining factor. The server global directory loads before the instance specific lib, and the instance specific lib loads before your deployable artifacts and their libraries. So basically, the more global libraries will outweigh the artifact local libraries.
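If you’re ever unsure which copy of a class actually won, you can ask the class itself where it was loaded from. Here’s a small diagnostic sketch (the classes probed are just examples – drop the same call into your own code for the class you care about):

```java
import java.security.CodeSource;

public class WhichJar {

    // returns the jar or directory a class was loaded from,
    // or null for bootstrap classes that have no code source
    static String locationOf(Class<?> clazz) {
        CodeSource src = clazz.getProtectionDomain().getCodeSource();
        if (src == null || src.getLocation() == null) {
            return null;
        }
        return src.getLocation().toString();
    }

    public static void main(String[] args) {
        // java.lang.String comes from the bootstrap loader, so no location
        System.out.println("String   -> " + locationOf(String.class));
        // this class reports the directory or jar it was loaded from
        System.out.println("WhichJar -> " + locationOf(WhichJar.class));
    }
}
```

Logging this from a servlet or startup bean tells you definitively whether a class came from your war’s WEB-INF/lib, the instance lib, or jboss/common/lib.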

Now what if you want your deployable artifact to override global libraries? Luckily, JBoss provides a way to scope the deployment. Scoping the deployment here means you’re making the libraries used by your deployment localized, superseding the global libraries with whats packaged in the artifact.

For War files you will need to add an entry to your jboss-web.xml file:

   &lt;class-loading java2ClassLoadingCompliance="false"&gt;
       &lt;loader-repository&gt;
           com.example:archive=unique-archive-name.war
           &lt;loader-repository-config&gt;java2ParentDelegation=false&lt;/loader-repository-config&gt;
       &lt;/loader-repository&gt;
   &lt;/class-loading&gt;

and for Ear files you’ll need to make your jboss-app.xml look like this:

   &lt;jboss-app&gt;
       &lt;loader-repository&gt;
           com.example:archive=unique-archive-name.ear
           &lt;loader-repository-config&gt;java2ParentDelegation=false&lt;/loader-repository-config&gt;
       &lt;/loader-repository&gt;
   &lt;/jboss-app&gt;
Where com.example is the package name for the specific class package (third party library) you want to override, and unique-archive-name.xxx is the name of the deployable artifact for which you want to localize class loading. Note that these descriptors only work for jboss and will not be honored by other vendors. Also worth mentioning: for your deployable artifacts, the most global artifact’s deployment scope will be honored, so if you have an ear file, the jboss-app.xml in the ear will override and cause any jboss-web.xml scoping configurations in embedded war files to be ignored. java2ParentDelegation is supposed to be disabled by default, but it’s a good idea to explicitly set it to false anyway just to be on the safe side – setting it to true will cause the classes referenced in this scope configuration to be loaded by the next most global scope (moving the class loading to the instance lib if it’s in an ear or war, and to the most global jboss/common/lib if it’s in the instance specific lib).

It’s also a good practice to make sure your war files don’t start with the same first few letters as other deployed war files in the same instance. In JBoss 4.x it was possible to collide class loading when 2 or more war files started with the same first few letters and the packaged class files in the WEB-INF directory shared a similar code base (ex: my_war.war vs my_warfile.war). The fix was to change the names of the war files so they were totally different. Whichever loaded first would be linked in the JBoss class loaders. If you run into a situation where old code keeps getting reloaded, keep this in mind.

JBoss wiki on Class Loading
Jboss Wiki on Class Loading use cases

Write a Stored Procedure in Postgres 8+

Stored Procs

Sometimes as developers we’re tasked with data intensive work like importing data into a database, cleaning up sets of incomplete records, or transferring data from one table to another through some kind of filter. While our application would normally be in charge of creating and maintaining the data, sometimes we don’t want to end up writing an entire module or mini application to address these tasks. Since they’re data intensive, a stored procedure might be a good approach to take. Stored procedures are programs written in a more robust version of sql (structured query language) that allows for the manipulation of data records directly within the database environment.

If we were to write the equivalent code using a layer written in java, .net, or php, there would be a lot of overhead in terms of processing power and performance – orders of magnitude more. As data is processed, results would normally be returned to that calling layer and shuffled around that layer’s memory, essentially adding another step to the process. If we make these changes as close to the data as possible, we’ll squeeze out as much performance as possible and suffer the least overhead. Just for perspective, here’s an example: a 1 gigabyte file could take several hours to import using java business logic, while a stored proc could take less than half an hour. Mileage may vary of course, but that’ll give you an idea of the performance cost you could save with data intensive tasks like that. A word of caution though: I’m not saying a stored proc is the way to go for your entire application; it’s merely a tool in your arsenal that can be used to get the job done by the most efficient means possible.


Here’s an example of a generic stored proc written in psql (postgres version).

CREATE OR REPLACE FUNCTION example_stored_proc() RETURNS void AS $$
DECLARE
     userRecord record;
     user_property_id bigint;
BEGIN
     FOR userRecord IN
          SELECT * FROM tb_user u ORDER BY u.user_id
     LOOP
          SELECT INTO user_property_id nextval('sq_user_property');

          -- user_property_id now has a value we can insert here
          INSERT INTO tb_user_property VALUES(
                    user_property_id
          );

          IF userRecord.email like 'user@domain.com' THEN

                    UPDATE tb_user SET email = 'user@other-domain.com' WHERE id = userRecord.id;

          ELSEIF userRecord.email is null THEN

                    UPDATE tb_user SET active = false WHERE id = userRecord.id;

          ELSE

                    RAISE NOTICE 'didn''t update any record';

          END IF;

          RAISE NOTICE 'added property for user id: %', userRecord.id;
     END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION example_stored_proc() RETURNS void AS $$

CREATE OR REPLACE FUNCTION will create the stored proc in the database. RETURNS declares the data type returned at the end. This example returns void, but an integer, a record, or even a result set may also be returned. The text in between the two pairs of $$ is the body of the procedure.

DECLARE

This keyword introduces the variables the stored proc will be using. It essentially lets the database know to allocate memory for use.

BEGIN

This marks the beginning of the stored proc logic. It naturally ends with END.

FOR userRecord IN
     SELECT * FROM tb_user u ORDER BY u.user_id
LOOP
     -- do stuff
END LOOP;


This is the basic looping structure used in psql. Notice the loop is built around a straightforward sql query – here is where the magic happens. The looping variable in this example is “userRecord” – it holds the currently fetched data record and allows you to manipulate it for your own means in the body of the loop. So, if you wanted to insert the value of userRecord.id into a table, you could just reference it as a variable, as shown in the insert statement in this particular loop’s body.


SELECT INTO user_property_id nextval('sq_user_property');

Using this construct allows you to capture query results in a variable for later use. The variable can be a record or a single column value. In order for it to work you need to declare the variable that’s going to take the value in the DECLARE section of the stored proc. Inline variable declaration is not supported.


IF userRecord.email like 'user@domain.com' THEN
     ...
ELSEIF userRecord.email is null THEN
     ...
ELSE
     ...
END IF;

As expected, the IF/THEN/ELSEIF/ELSE/END IF construct can be used to create conditional sequences of logic. The conditions can be any kind of expression postgres can evaluate. The ELSEIF can be used to test secondary conditions, while the ELSE of course is the default if no other conditions are met. Fairly self explanatory.


RAISE NOTICE 'added property for user id: %', userRecord.id;

This is your standard psql logging output statement. The text in the single quotes is output to the console/message window, and every “%” is substituted in order with the values listed after the commas in the statement. So, in this case “userRecord.id” is substituted into the first % to appear in the output text. If you wanted to have multiple values output you could construct your RAISE NOTICE like this:

RAISE NOTICE 'this is record % out of 1000, and its value is %', record_number, record_value; 

It would substitute record_number into the first % and record_value into the second % appearing in the text.
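For the Java developers reading along: the positional substitution behaves a lot like String.format, except plpgsql uses a bare % for every placeholder instead of typed specifiers. A quick comparison sketch (the record values are made up):

```java
public class NoticeDemo {

    // builds the same message the RAISE NOTICE above would emit,
    // using Java's typed format specifiers in place of plpgsql's bare %
    static String notice(int recordNumber, String recordValue) {
        return String.format(
                "this is record %d out of 1000, and its value is %s",
                recordNumber, recordValue);
    }

    public static void main(String[] args) {
        // prints: this is record 42 out of 1000, and its value is foo
        System.out.println(notice(42, "foo"));
    }
}
```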

*nix commands I can’t do without

Unix/Linux/*nix survival 101

Let me start with the obvious: I’m definitely not a unix guru by any means. I do however use it on a daily basis for basic build/development oriented tasks, so I know enough to get by. Since my friend just installed his first ever linux distribution (CentOS, Huzzah!), I thought I’d write something up on some common unix commands that help me get through the day.

grep [command flags] [search text] [filename]

grep (globally search for a regular expression and print) is the file text search command. Give it a regular expression and it will print out the matching lines it finds in the file indicated by filename. Here’s an example:

[root@bedrock some_jboss_folder]$ grep html readme.html
&lt;!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"&gt;
&lt;meta content="text/html" http-equiv="content-type"&gt;
&lt;a href="http://docs.jboss.org/html"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.jboss.org/index.html?module=bb"&gt;JBoss
Server  is licensed under the &lt;a href="lgpl.html"&gt;LGPL&lt;/a&gt;,

Some useful flags include -R (recurse into sub directories), -c (show just the total match count), -m NUM (stop after NUM matches), and -i (ignore upper/lower case).

ps aux | grep [search text]

This is a command you can use to get information about what processes the kernel is currently running. Adding the pipe after the ps command feeds the listing results to the grep search command. This is particularly useful when you want to look for a specific set of procs run by a user or script. Here’s an example:

[root@bedrock ~]$ ps aux | grep jboss
jboss 10910 0.0 0.1 4884 1176 ? S Feb04 0:00 /bin/sh /server/jboss/bin/run.sh -c services -b -Djava.net.preferIPv4Stack=true
jboss 10932 0.2 36.4 1089728 371456 ? Sl Feb04 31:43 java -Dprogram.name=run.sh -Xms128m -Xmx512m -XX:MaxPermSize=256m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.net.preferIPv4Stack=true -Djava.endorsed.dirs=/server/jboss/lib/endorsed -classpath /server/jboss/bin/run.jar org.jboss.Main -c services -b -Djava.net.preferIPv4Stack=true
500 20300 0.0 0.0 4200 700 pts/0 S+ 20:43 0:00 grep jboss

ps (process status) fetches a list of running processes. The a and x flags tell it to return a listing of all procs; the u flag also lists the user each proc is running as. I use the grep to figure out if a jboss server is up and running, and sometimes to see what input parameters it used on startup – like what ip it bound to: “-b”. The results above list first the user and process id, and then information about the proc.

netstat -ntalp | grep [search text]

This command must be run as root, but it lets you get a listing of network ports that are currently in use. This is particularly useful when trying to figure out port conflicts or to see if a particular server is listening on the correct port.

[root@bedrock ~]# netstat -ntalp | grep java
tcp 0 0* LISTEN 10932/java
tcp 0 0* LISTEN 10932/java
tcp 0 0* LISTEN 10932/java
tcp 0 0* LISTEN 10932/java
tcp 0 0* LISTEN 10932/java
tcp 0 0* LISTEN 10932/java

You can grep for port, ip/domain, status etc.

kill [signal flag] [process id]

This is the standard “kill process”, “terminate it dead” command. Usually when a proc refuses to shut down and all hell is breaking loose, and you can’t take no for an answer, signal flag “-9” will insta kill the proc. You can get the process id from the “ps aux | grep” command.

[root@bedrock jboss]$ kill -9 10932

Here I used the process id of the jboss instance from the ps aux | grep example listed above. Use ps aux to figure out which process id you want to terminate.
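To see the whole cycle end to end without harming a real server, here's a sketch that kills a throwaway background process instead (the sleep stands in for a hung jboss instance):

```shell
# start a throwaway background process to stand in for a runaway server
sleep 300 &
pid=$!                            # $! holds the pid of the last background job

kill -9 "$pid"                    # SIGKILL: terminate it dead, no cleanup
wait "$pid" 2>/dev/null || true   # reap the dead job

# kill -0 sends no signal; it only tests whether the pid still exists
kill -0 "$pid" 2>/dev/null && echo "still alive" || echo "gone"   # prints: gone
```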

./run.sh [args]

This is the standard syntax for invoking a script, assuming you have run privileges. In windows you’d just type in the name of the script, but in unix you should prefix the script name with “./”.

As Dave Cheney explains in a comment:

    The reason you have to put “./” as a prefix to a script in your current working directory is the search path for executable programs does not (generally) include “.”
    To the shell, “.” expands to the current directory so ./run.sh is equivalent to /home/kevin/run.sh (for example). As you have provided a full absolute path, the shell will not have to try the prefixes available in your $PATH environment.

So essentially, by adding the “./” before the script name you give the shell an explicit path to the script you want to run, so it doesn’t have to search your $PATH to find it. So if the script is named run.sh and your current working directory is the bin folder that contains it, you can invoke it like this:

[root@bedrock bin]$ ./run.sh -c services -b -Djava.net.preferIPv4Stack=true

If your script takes parameters, you can pass them into the script after the script name.
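As a self-contained sketch (the script name and its flags are invented for illustration):

```shell
# write a tiny stand-in script to the current directory
cat > run-demo.sh <<'EOF'
#!/bin/sh
echo "started with args: $@"
EOF
chmod +x run-demo.sh

# "./" hands the shell an explicit path, so $PATH is never searched
./run-demo.sh -c services -b    # prints: started with args: -c services -b
```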

tail [-f or -NUM] [path to file]

tail is a command that outputs the last lines of a file to the terminal window. If you use the “-f” flag, it’ll continuously read the file as its contents grow. If you feed it a line count like “-1000”, it will output the 1000 most recent lines of the file. We’ll say something like – “Hey, I’m gonna tail the logs while the server starts up”. This means we’re monitoring the logs using this tail command. And knowing is half the battle.
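A minimal sketch with a generated file (the path is made up):

```shell
# build a file with ten numbered lines
seq 1 10 > /tmp/growing.log

# print only the 3 most recent lines
tail -3 /tmp/growing.log        # prints 8, 9 and 10 on separate lines

# -f would keep following the file as new lines arrive; it runs until
# you hit control+c, so it's commented out here:
# tail -f /tmp/growing.log
```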

Musk explains a better alternative that allows you to drop out of follow mode and search:

    You do not need tail – use less +F, or press Shift-F while in less, and it will follow the currently chosen file if content is added.

    Example: log.txt

    less +F log.txt and you will have the same behaviour as when using tail -f log.txt except that you can use CTRL+C to drop out of follow mode and then use the search features available in less.

chown -R [user].[group] [path]

This changes a file or directory’s owner to a new user/group. The -R flag tells it to recurse into subdirectories.

[root@bedrock jboss]$ chown -R jboss.jboss .

This command will work assuming there is a group and user named jboss, and it will change ownership of everything in the current directory and below to jboss.

chmod -R [permissions] [filename/expression]

This will set the permissions for the indicated files to the new set of permissions listed. The mode can be given either as a symbolic string describing what each class of user can do, or as an octal number of up to 4 digits.

[root@bedrock some_jboss_folder]$ chmod ug=rwx,o=rw readme.html
[root@bedrock some_jboss_folder]$ chmod 0776 readme.html

In the first example, we set the file owner (u) and group (g) to allow read (r), write (w) and execute (x). Then we set everyone else’s (o) permissions to read and write only, no execute. In the second example, we set it to 0776, which is the octal representation of the first command. 0777 sets read/write/execute permissions for everyone, the same as ugo=rwx.
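To see the symbolic and octal forms line up, here's a sketch on a scratch file; note that stat -c '%a' is the GNU coreutils way of printing a file's octal mode (BSD stat uses different flags):

```shell
touch readme-demo.html                # a scratch file; the name is made up

chmod ug=rwx,o=rw readme-demo.html    # symbolic form
stat -c '%a' readme-demo.html         # prints: 776 (rwx=7, rwx=7, rw-=6)

chmod 0776 readme-demo.html           # the same permissions in octal
stat -c '%a' readme-demo.html         # prints: 776 again
```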

vi [filename]

Basic text editor *nix usually ships with. It will open the indicated file in read mode, and if the file doesn’t exist it will start a new buffer that isn’t written to disk until you save it. To enter insert mode, hit the Insert key (or i); you can then edit the file. After you make your edits, hit the Escape key to get back into command mode. If you want to save the file, enter “:w”. If you then want to quit, type in “:q”.

[root@bedrock some_jboss_folder]$ vi readme.txt
<li>lib/ – the same
static library jars with a few jars, as most have moved to top level common/lib</li>
“readme.txt” 718L, 36365C written

ping -c [count] [ip/domain]

Pings an ip or domain with packets of data. Unlike its Windows cousin, you have to either pass in the number of times to ping (-c NUM) or hit control+c to stop pinging.

[root@bedrock some_jboss_folder ]$ ping -c 4 localhost
PING localhost.localdomain ( 56(84) bytes of data.
64 bytes from localhost.localdomain ( icmp_seq=1 ttl=64 time=0.040 ms
64 bytes from localhost.localdomain ( icmp_seq=2 ttl=64 time=0.046 ms
64 bytes from localhost.localdomain ( icmp_seq=3 ttl=64 time=0.033 ms
64 bytes from localhost.localdomain ( icmp_seq=4 ttl=64 time=0.048 ms

— localhost.localdomain ping statistics —
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.033/0.041/0.048/0.009 ms


top

This command takes over your terminal window and fills it with a listing of all the procs that are currently running, along with cpu and memory usage information. Hitting < and > changes the column the listing is sorted by. q will quit top, returning you to the linux prompt. This is what it looks like:

top – 23:12:30 up 40 days, 16:38, 1 user, load average: 0.06, 0.02, 0.00
Tasks: 158 total, 1 running, 121 sleeping, 36 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1018232k total, 1003040k used, 15192k free, 138436k buffers
Swap: 2064376k total, 30236k used, 2034140k free, 312912k cached

2165 smmsp 20 0 9208 748 640 S 0.0 0.1 0:00.40 sendmail
1359 rpcuser 20 0 2988 560 556 S 0.0 0.1 0:00.03 rpc.statd
1346 rpc 20 0 2404 556 504 S 0.0 0.1 0:02.62 rpcbind
1 root 20 0 2012 624 560 S 0.0 0.1 0:04.71 init

man [command name]

If you need more detail on a specific command, you can get help from the unix manual by invoking man:

[root@bedrock ~]# man top
TOP(1) Linux User’s Manual TOP(1)

top – display Linux tasks

top -hv | -bcHisS -d delay -n iterations -p pid [, pid …]

The traditional switches “-” and whitespace are optional.

The top program provides a dynamic real-time view of a running system. It can display system summary information as well as a list of tasks currently being managed by the Linux kernel. The types of system summary information shown and the types, order and size of information displayed for tasks are all user configurable and that configuration can be made persistent across restarts.

ls [list flag] [path to directory]

This prints out a listing of the indicated directory’s contents, or the current directory if no path is supplied. -l uses the long listing format (one file/directory per line, with permissions, owner, size and date), and -a lists everything including files that start with a period.

[root@bedrock ~]# ls -la
total 168
drwxr-x---. 10 root root 4096 2009-11-20 23:09 .
drwxr-xr-x. 30 root root 4096 2010-02-07 22:32 ..
-rw-------. 1 root root 1675 2009-11-11 18:55 anaconda-ks.cfg
-rw-------. 1 root root 21354 2010-02-10 03:39 .bash_history
-rw-r--r--. 1 root root 18 2009-03-30 07:51 .bash_logout
-rw-r--r--. 1 root root 176 2009-03-30 07:51 .bash_profile
-rw-r--r--. 1 root root 176 2004-09-22 23:59 .bashrc
drwx------. 3 root root 4096 2009-11-12 02:17 .config
-rw-r--r--. 1 root root 100 2004-09-22 23:59 .cshrc
drwx------. 3 root root 4096 2009-11-11 19:06 .dbus

cat [filename1] [filename2] > [outputfile]

cat lets you concatenate and output the contents of one or more files to the terminal window, or write the result to a file if you include the “>” operator. Thanks Kevin. Here’s an example:


[root@bedrock ~]# vi test.txt
concatenate me!
this is a test


[root@bedrock ~]# vi concatenate.txt
a file that needs to be concatenated



[root@bedrock ~]# cat test.txt concatenate.txt > output.txt

The result

[root@bedrock ~]# more output.txt
concatenate me!
this is a test
a file that needs to be concatenated


sed -i 's/[some_text]/[other_text]/' [filename]

sed – stream editor for filtering and transforming text (blatantly stolen from “man sed”‘s documentation). This command will replace “some_text” with “other_text” in the file indicated. One occurrence per line is replaced. Thanks for this one Silvery.

Consider the file “test.txt”:

[root@bedrock jboss]# more test.txt
this is a file
this ia another file
lets faceroll files

And this is what happens when we run sed on it:

[root@bedrock jboss]# sed -i 's/file/folder/' test.txt
[root@bedrock jboss]# more test.txt
this is a folder
this ia another folder
lets faceroll folders
[root@bedrock jboss]#


more/less [filename]

more and less enable you to view the contents of a file one page at a time on the screen. Once you are browsing the contents, you can hit “s” or “f” to scroll through multiple lines of text. “v” will fire up an editor at the current line you’re working on. If you have a large list of files and want to check them one page at a time, you could try “ls | less”. Thanks again Kevin.


clear

Clears the visible screen of text, starting your prompt at the top of the window.

mkdir [directory name]

This command simply creates a directory with default permissions and ownership.

cp -R [source] [destination directory]

Copies a file/folder from one location into another. -R flags to copy recursively.

mv [source] [destination directory]

Moves or renames a source file or directory to a new location/name.

rm -Rf [folder/file]

Deletes a file. -R flags to delete recursively. When invoked on a directory containing write-protected files it would normally go file by file asking if you want to delete such-and-such; use the “f” flag to force delete and skip the file-by-file questions.
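A quick round trip through cp, mv and rm on a scratch tree (all paths invented):

```shell
mkdir -p scratch/src                     # -p also creates missing parents
echo "hello" > scratch/src/a.txt

cp -R scratch/src scratch/backup         # recursive copy of the whole folder
mv scratch/src/a.txt scratch/src/b.txt   # mv doubles as a rename

ls scratch/src                           # prints: b.txt
ls scratch/backup                        # prints: a.txt

rm -Rf scratch                           # force-delete the whole tree, no prompts
```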

cd [path to new directory]

cd changes the current directory to the path indicated. A “..” means to move up one directory. If the path begins with “/” it means start from the disk root folder. Anything else implies a relative path to the new folder.
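A short sketch of absolute and relative moves (the directory tree is made up):

```shell
mkdir -p /tmp/demo/bin    # a scratch tree to walk around in

cd /tmp/demo/bin          # absolute path: starts from the disk root "/"
cd ..                     # relative: up one level, now in /tmp/demo
cd bin                    # relative: back down into bin
pwd                       # prints: /tmp/demo/bin
```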

--color=auto

In his comment, Ryan Fox points out:

    The `--color=auto` option adds colour to the output of some commands, like ls or grep. In ls, the colours change depending on the file type, permissions, etc. In grep, it will highlight the text that matched your regex.

Here’s an example:

[root@bedrock jboss]# ps aux | grep jboss --color=auto
root 5215 0.0 0.0 4200 712 pts/1 S+ 05:56 0:00 grep jboss --color=auto
jboss 10910 0.0 0.1 4884 1176 ? S Feb04 0:00 /bin/sh /server/jboss/bin/run.sh -c services -b

Open ended

I’m sure there must be other useful commands I have missed. If anyone has any other suggestions to add/edit these entries, please feel free to comment and I’ll update accordingly.

Transforming XML into MS Excel XML

MS Excel understands XML?

If you need to export xml to a Microsoft Excel friendly format, you could stress over the HSSF (Horrible Spread Sheet Format, for the uninitiated) format with Apache’s POI framework, or you could transform your xml into a format Excel understands. This approach will allow you to decorate your cells with stylized fonts and borders; what it will not allow you to do is create or add complex objects like charts, graphs or pictures. This xml format is a watered down version of Excel’s native format. If you require the ability to embed images, graphs and complex objects, have a look at Apache’s framework.

Alright, Show me some code

Let’s take a look at the xml we’re going to be using:

<Report caption="Reporting">
	<block 	caption="Staff Member Report" 
		userIdLabel="User Id" 
		accountNameLabel="Account Name"
		createDateLabel="Date Created"
		emailLabel="Email">

		<staffMember id="00000" accountName="..." createDate="..." accountEmail="..." />
		<staffMember id="00001" accountName="..." createDate="..." accountEmail="..." />
		<staffMember id="00002" accountName="..." createDate="..." accountEmail="..." />
	</block>
</Report>

Pretty straightforward xml, optimized for shorter xpath expressions.

The Magic XSL

<?xml version="1.0" encoding="ISO-8859-1"?>
<?mso-application progid="Excel.Sheet"?>
<xsl:stylesheet version="1.0" 
	xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
	xmlns="urn:schemas-microsoft-com:office:spreadsheet"
	xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">

	<xsl:template match="/">
		<Workbook>
			<Styles>
				<Style ss:ID="Default" ss:Name="Normal">
					<Alignment ss:Vertical="Bottom" />
					<Borders />
					<Font />
					<Interior />
					<NumberFormat />
					<Protection />
				</Style>
				<Style ss:ID="s21">
					<Font ss:Size="22" ss:Bold="1" />
				</Style>
				<Style ss:ID="s22">
					<Font ss:Size="14" ss:Bold="1" />
				</Style>
				<Style ss:ID="s23">
					<Font ss:Size="12" ss:Bold="1" />
				</Style>
				<Style ss:ID="s24">
					<Font ss:Size="10" ss:Bold="1" />
				</Style>
			</Styles>

			<Worksheet ss:Name="{//Report/@caption}">
				<Table>
					<Column ss:AutoFitWidth="0" ss:Width="85" />
					<Column ss:AutoFitWidth="0" ss:Width="115" />
					<Column ss:AutoFitWidth="0" ss:Width="115" />
					<Column ss:AutoFitWidth="0" ss:Width="160" />
					<Column ss:AutoFitWidth="0" ss:Width="115" />
					<Column ss:AutoFitWidth="0" ss:Width="85" />
					<Column ss:AutoFitWidth="0" ss:Width="85" />
					<Column ss:AutoFitWidth="0" ss:Width="160" />

					<Row ss:AutoFitHeight="0" ss:Height="27.75">
						<Cell ss:StyleID="s21">
							<Data ss:Type="String">Example Spreadsheet</Data>
						</Cell>
					</Row>
					<Row ss:AutoFitHeight="0" ss:Height="18">
						<Cell ss:StyleID="s22">
							<Data ss:Type="String">
								<xsl:value-of select="//Report/@caption" />
							</Data>
						</Cell>
					</Row>

					<xsl:call-template name="staffReport" />
				</Table>
			</Worksheet>
		</Workbook>
	</xsl:template>

	<xsl:template name="staffReport">

		<Row ss:AutoFitHeight="0" ss:Height="18">
			<Cell ss:StyleID="s23">
				<Data ss:Type="String">
					<xsl:value-of select="//Report/block/@caption" />
				</Data>
			</Cell>
		</Row>
		<Row>
			<Cell ss:StyleID="s24">
				<Data ss:Type="String">
					<xsl:value-of select="//Report/block/@userIdLabel" />
				</Data>
			</Cell>
			<Cell ss:StyleID="s24">
				<Data ss:Type="String">
					<xsl:value-of select="//Report/block/@accountNameLabel" />
				</Data>
			</Cell>
			<Cell ss:StyleID="s24">
				<Data ss:Type="String">
					<xsl:value-of select="//Report/block/@createDateLabel" />
				</Data>
			</Cell>
			<Cell ss:StyleID="s24">
				<Data ss:Type="String">
					<xsl:value-of select="//Report/block/@emailLabel" />
				</Data>
			</Cell>
		</Row>

		<xsl:for-each select="//Report/block/staffMember">
			<Row>
				<Cell>
					<Data ss:Type="String">
						<xsl:value-of select="@id" />
					</Data>
				</Cell>
				<Cell>
					<Data ss:Type="String">
						<xsl:value-of select="@accountName" />
					</Data>
				</Cell>
				<Cell>
					<Data ss:Type="String">
						<xsl:value-of select="@createDate" />
					</Data>
				</Cell>
				<Cell>
					<Data ss:Type="String">
						<xsl:value-of select="@accountEmail" />
					</Data>
				</Cell>
			</Row>
		</xsl:for-each>
	</xsl:template>
</xsl:stylesheet>

The overall XSL structure is pretty much the same as any other XSL. I broke up the report into two main components: the generic, enclosing, Workbook xsl, and the main staffMember xsl template. The enclosing Workbook xsl has the report metadata and sets up the overall layout while the staffMember template loops through the staffMember xml nodes, outputting one row of data per node.

Styled Text

Let’s take a look at the styles mechanism:

	<Style ss:ID="Default" ss:Name="Normal">
		<Alignment ss:Vertical="Bottom" />
		<Borders />
		<Font />
		<Interior />
		<NumberFormat />
		<Protection />
	</Style>
	<Style ss:ID="s21">
		<Font ss:Size="22" ss:Bold="1" />
	</Style>

Notice there is a “Default” style, which offers a venue to lay out default styles for all your cells. Then you have unique style definitions like ss:ID="s21" which define a font size and weight:

<Font ss:Size="22" ss:Bold="1" />

Size is measured in points, so take that into account as you determine the size you would like to use. Bold="1" flags the style to render in bold weight, as opposed to regular, non-bold, which would be Bold="0". If you wanted to change the font you could add ss:FontName="Tahoma". A particular style is linked to a cell by adding the style ID as a cell attribute like this:

<Cell ss:StyleID="s22">
	<Data ss:Type="String">some stylized text</Data>
</Cell>

where the ss:StyleID matches the style definition’s ss:ID.

Sizing Columns

Note that you can add multiple Worksheets – all you need to do is add more Worksheet XML nodes, and stick data in them. You can initialize the starting column widths by using the Column nodes under the Table node:

<Column ss:AutoFitWidth="0" ss:Width="85" />
<Column ss:AutoFitWidth="0" ss:Width="115" />

If AutoFitWidth is set to true (1) and no Width is given, it will auto size the columns to whatever width the numeric or date values consume. Text is not automagically resized. When it’s flagged to 0 and a Width is specified, the column is sized to whatever Width is set to. When set to 1 and a Width is present, it will set the width to the specified value, and auto size if the cell data is larger than the Width.

Simple Formulas

You can also embed Excel formulas as part of the XSL so your spreadsheet comes pre-wired with formulas. I didn’t include any in this example but I’ll go over an example snippet of code:

<Cell ss:Index="2" ss:Formula="=SUM(R[-3]C,R[-2]C,R[-1]C)">
	<Data ss:Type="Number"></Data>
</Cell>

ss:Formula="=SUM(R[-3]C,R[-2]C,R[-1]C)" might look a little strange, since you’re probably used to the =SUM(A12,A13,A14) type of notation used in the normal gui. The XML notation is merely a mechanism for locating which cells to add up in this particular sum. R corresponds to the relative row, and C corresponds to the relative column. So R[-3]C means the cell 3 rows above the current cell in the same column (since there is no “[x]” after the C). If we wanted to include the cell 2 rows down and 4 columns to the left we could express that as R[2]C[-4]. Simple x/y coordinates. For more on formulas, have a closer look at Microsoft’s ss:Cell documentation.

The Rendered Spreadsheet

That’s pretty much all there is to it. The xml isn’t perfect, but it’s definitely more presentable than regular csv files without getting in the way for anyone that needs to work with the actual data. Here’s a screen shot for the atheists:

XML rendered as MS Excel output via xslt

Source Files
rendered.xml (change extension to .xml, and open with MS Excel)

Microsoft overview on Excel XML structure
Microsoft XML Node reference
Wikipedia Article on Office XML formats. Yep Word also has an XML format.


When looking at the MS Excel documentation, be aware that they didn’t declare:

xmlns="urn:schemas-microsoft-com:office:spreadsheet"

but instead

xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"

So their Workbook xsl has ss: preceding every node, when compared to my workbook xsl.

Integrating Spring MVC 3.0 with JSR 303 (aka javax.validation.*)

Annotated POJO validation comes to a JDK near you!

The new annotated validation spec (jsr 303) is pretty slick, especially when used along side Spring MVC 3.0, and when backed by ejb3 entities. I’m pretty impressed with how easily it integrates with Spring MVC’s framework, and with how seamlessly error messages are passed to the form taglibs so they show up in your web forms.

I know some of you might argue that the current validation framework might not address complex validations, but after giving Hibernate’s reference implementation documentation a look, it seems interdependent validations are at least possible through embedded @Valid in complex objects. Even if you have to come up with your own really weird validation for a particular field, jsr 303/hibernate offers a way to create your own custom annotation driven validations. For the remaining 95% of all the other web forms, you’re probably going to be alright if you use the pre-defined validations offered by jsr 303.

Getting started

Download the jsr 303 reference implementation jars from SourceForge, via Hibernate’s download page. You’ll need to add the main Hibernate validator jar (currently hibernate-validator-4.0.2.GA.jar as of 2/8/2009) and the jars included in the release’s lib directory to your application’s classpath if they’re not already there (if you’re on jboss 5.1, probably at least validation-api-1.0.0.GA.jar, maybe more). The Hibernate reference implementation release also includes the jar files required to run in a jdk 5 runtime; include those if you’re not running on jdk 6. Download Spring MVC from Spring’s download page; it’s part of the Spring 3.0 release. Spring MVC requires the following jars in your classpath:

    • org.springframework.asm
    • org.springframework.beans
    • org.springframework.context
    • org.springframework.context-support
    • org.springframework.core
    • org.springframework.expression
    • org.springframework.web
    • org.springframework.web.servlet

Wiring Spring MVC

You’ll need to make sure you map Spring MVC correctly. Consider the following in web.xml:

	<!-- Spring Action Servlet -->
	<servlet>
		<servlet-name>spring</servlet-name>
		<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
		<load-on-startup>1</load-on-startup>
	</servlet>

	<servlet-mapping>
		<servlet-name>spring</servlet-name>
		<url-pattern>*.go</url-pattern>
	</servlet-mapping>

And then in spring-servlet.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:context="http://www.springframework.org/schema/context"
	xmlns:mvc="http://www.springframework.org/schema/mvc"
	xsi:schemaLocation="
		http://www.springframework.org/schema/beans
		http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
		http://www.springframework.org/schema/context
		http://www.springframework.org/schema/context/spring-context-3.0.xsd
		http://www.springframework.org/schema/mvc
		http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

	<!-- Scans the class path of this application for @Components to deploy -->
	<context:component-scan base-package="com.faceroller.web" />

	<bean id="multipartResolver" class=
		"org.springframework.web.multipart.commons.CommonsMultipartResolver" />

	<!-- flags annotations for processing -->
	<mvc:annotation-driven />

	<!-- forward "/" requests -->
	<mvc:view-controller path="/" view-name="welcome"/>

	<!-- Configures Handler Interceptors -->	
	<!-- Changes the locale when a 'locale' request parameter is sent -->
	<bean class="org.springframework.web.servlet.i18n.LocaleChangeInterceptor"/>

	<!-- Saves a locale change using a cookie -->
	<bean class="org.springframework.web.servlet.i18n.CookieLocaleResolver" 
		id="localeResolver" />

	<!-- hey man, I like my jsp files in "/". WEB-INF just seems.. ugly -->
	<bean class=
		"org.springframework.web.servlet.view.InternalResourceViewResolver">
		<property name="viewClass" value=
			"org.springframework.web.servlet.view.JstlView"/>
		<property name="prefix" value="/"/>
	</bean>

	<bean id="messageSource" class=
		"org.springframework.context.support.ResourceBundleMessageSource">
		<property name="basename" value="messages" />
	</bean>
</beans>

Now that we’ve squared away the setup, on with the examples:

Example ejb3 jsr-303 validation equipped entity bean

import java.io.Serializable;
import java.math.BigDecimal;
import java.util.*;

import javax.persistence.*;
import javax.validation.Valid;
import javax.validation.constraints.*;

import org.hibernate.validator.constraints.Email;
import org.hibernate.validator.constraints.NotEmpty;

@Entity
public class InventoryItem implements Serializable {

	private static final long serialVersionUID = 1L;

	@Id @GeneratedValue
	protected int id;

	@NotNull
	protected BigDecimal price = new BigDecimal("0.00");

	@NotEmpty(message = "Name is a required field")
	protected String name;

	@Min(1)
	protected int minimumPrice;

	@Email
	private String mail;

	@Pattern(regexp = "[a-z]*")
	private String lowerCaseName;

	@Future
	private Date futureDate;

	@AssertTrue
	private boolean mustBeTrue;

	@Valid
	protected List<InventoryImage> images = new ArrayList<InventoryImage>();

	// setters/getters here...
}

Let’s examine the annotations:


@NotNull

@NotNull flags the annotated field as valid only if it has a value assigned. A String with a null value will fail, while a String with a “” value will pass. Since the “message” parameter is not defined, the error message will default to whatever the validation package ships with; in this case it will read something like “may not be null”.

@NotEmpty(message = “Name is a required field”)

@NotEmpty flags the annotated field as valid if the field is both not null and not a value of “”. Since the “message” parameter is defined for this annotation, the param value will replace the default message passed into spring mvc’s error framework.


@Min(1)

@Min flags the field as valid only if it has a value equal to or higher than the value in the parens. The contrary to this is @Max, which will flag as valid values lower than the value in the parens.


@Email

@Email flags the field as valid only if the field contains a valid email address.


@Pattern(regexp = "[a-z]*")

@Pattern will flag the field as valid only if the string matches the regular expression passed in as the parameter. In this case it will only pass if the string is made up solely of lowercase letters.


@Future

@Future will flag as valid only if the annotated date is in the future. The contrary to this is @Past, which is valid only if the date has already passed.


@AssertTrue

@AssertTrue will flag as valid if the annotated boolean resolves to true. The contrary to this is @AssertFalse, which will flag as valid only if the boolean resolves to false.


@Valid

@Valid will validate only if the annotated complex object itself validates. Let’s say in this case that InventoryImage has two validation annotated fields; if any InventoryImage fails either of those two fields, then the enclosing InventoryItem will fail validation because of the @Valid annotation. This is how complex cross object validations are supported, other than defining your own.

Now that we’ve annotated our bean, we’ll need to hook it into a Spring MVC controller.

The Spring MVC Controller

package com.faceroller.web;

@Controller
public class InventoryController {

	private static Log log = LogFactory.getLog(InventoryController.class);

	/*
	 * initialize the form
	 */
	public ModelAndView addInventorySetup(InventoryItem item){
		log.info("setup add inventory");
		return new ModelAndView("/inventory/add.jsp", "item", item);
	}

	/*
	 * process the form
	 */
	@RequestMapping(value="/add/meta.go", method=RequestMethod.POST)
	public String processInventoryMeta(
			@ModelAttribute("item") @Valid InventoryItem item, 
			BindingResult result) {

		log.info("process add inventory");

		if (result.hasErrors()) {
			return "/inventory/add.jsp";
		}

		InventoryService service = ServiceLocator.getInventoryService();
		return "redirect:/add/images/"+item.getId();
	}

	/*
	 * forward to whatever page you want
	 */
	public ModelAndView getInventoryItem(@PathVariable int itemId){

		log.info("getting item");

		InventoryService service = ServiceLocator.getInventoryService();
		InventoryItem item = service.getInventoryItemById(itemId);

		return new ModelAndView("/inventory/browse.item.jsp", "item", item);
	}
}
Pay special attention to

	@RequestMapping(value = "/add/meta.go", method=RequestMethod.POST)
	public String processInventoryMeta(
			@ModelAttribute("item") @Valid InventoryItem item, 
			BindingResult result) 

You’ll notice @Valid marked up right before the InventoryItem item bean parameter. This is the annotation that does all the validation magic for us. There is no need to implement a custom validator factory, as spring mvc’s framework would normally require. If the bean fails validation, BindingResult result will be prepopulated with all corresponding JSR 303 validation errors. The catch is you have to add the @ModelAttribute(“item”) annotation to the signature, otherwise the form bean in the jsp will not have access to all the error messages passed along by the validations.

The jsp code

<form:form method="post" commandName="item" action="/process/form">
<table width="100%" border="0">
	<tr><td colspan="3" class="bottomPadding">
		<span class="secionHeader">Add item to inventory</span>
	</td></tr>
	<tr><td class="labelColumn" width="100">
		Price
	</td><td width="100">
		<form:input path="price"/>
		<form:errors path="price" cssClass="error"/>
	</td></tr>
	<tr><td class="labelColumn">
		Name
	</td><td>
		<form:input path="name"/>
		<form:errors path="name" cssClass="error"/>
	</td></tr>
</table>
</form:form>
This is just a simple form, nothing new here, but I’m including it for completeness. The Spring MVC framework will correctly populate the form tags with any bean errors should the form fail validation. The form tags are part of the standard spring taglibs, found in the org.springframework.web.servlet.* jar included in the Spring 3.0 distribution.

Stuart Gunter pointed out in a comment to this post that there is a workaround for injecting your own messages using spring’s autowiring. Click the jump for his example.

Hibernate 4.x Validation reference implementation
Spring MVC 3.0 documentation

Quartz Scheduled Jobs – v1.5.2

Java, XML, and cron driven scheduling made easy.

Projects here and there often need some kind of mechanism to schedule jobs at odd hours or intervals. Quartz is a robust, flexible tool you can use to accomplish simple to complex job scheduling. There are a number of ways to use/configure quartz, but I’ve grown accustomed to using it with an xml based configuration. There are a few things we need to set up, unfortunately, so there is a certain amount of plumbing to work out, but once that infrastructure is in place, it’s much less work to set up additional jobs.


Originally, I went on about writing a custom quartz servlet to initialize the engine, but there’s an even easier way to set this up, as Sibz has pointed out in a comment:

	<servlet>
		<servlet-name>QuartzInitializer</servlet-name>
		<display-name>Quartz Initializer Servlet</display-name>
		<servlet-class>org.quartz.ee.servlet.QuartzInitializerServlet</servlet-class>
		<load-on-startup>1</load-on-startup>
		<init-param>
			<param-name>config-file</param-name>
			<param-value>/some/path/my_quartz.properties</param-value>
		</init-param>
	</servlet>

This xml snippet was blatantly hijacked from quartz’s documentation page. As you might have guessed, this xml configuration goes in your web.xml. No need to write your own initializer servlet, just plug and play.

We’ll need to add 2 property files. The one that fine tunes the engine in our example is quartz.properties:


If you noticed in the web.xml (the init param named “config-file” is set to the path <param-value>/some/path/my_quartz.properties</param-value>), we load up a properties file that configures the quartz engine.

org.quartz.plugin.jobInitializer.fileNames = quartz-config.xml
org.quartz.plugin.jobInitializer.overWriteExistingJobs = true
org.quartz.plugin.jobInitializer.failOnFileNotFound = true

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool 
org.quartz.threadPool.threadCount = 5 
org.quartz.threadPool.threadPriority = 5

The first half of the settings is pretty straightforward, but the second half is all about tuning. Setting a class other than SimpleThreadPool means you’ve written your own implementation for quartz thread management – you probably really know what you’re doing and you can stop reading. threadCount controls the number of quartz threads dedicated to the engine. One is plenty for a job that fires off once or a few times a day. If you plan on running thousands of jobs a day with heavy loads, you’ll want something like 50 threads, up towards about 100. threadPriority 5 means normal priority, while 1 is the lowest and 10 the highest priority. For the most part 5 is plenty; if you have cpu intensive processing going on, you can tune this to make sure your jobs fire off when they’re supposed to.

The second file we need to set up is the xml that configures your quartz job, quartz-config.xml:

<?xml version="1.0" encoding="UTF-8"?>
<quartz>
	<job>
		<job-detail>
			<name>nightlyJob</name>
			<group>DEFAULT</group>
			<description>schedule a nightly job</description>
			<job-class>com.examples.quartz.Scheduler</job-class>
			<job-data-map>
				<entry>
					<key>username</key>
					<value>someUser</value>
				</entry>
				<entry>
					<key>password</key>
					<value>somePassword</value>
				</entry>
			</job-data-map>
		</job-detail>
		<trigger>
			<cron>
				<name>nightlyTrigger</name>
				<group>DEFAULT</group>
				<job-name>nightlyJob</job-name>
				<job-group>DEFAULT</job-group>
				<cron-expression>0 0/5 * * * ?</cron-expression>
			</cron>
		</trigger>
	</job>
</quartz>

This file is made up of two main sections. job-detail configures the job’s metadata, while trigger defines the configuration and cron expression that fires off the job. Fields like the name in the job-detail and the job-name in the trigger have to match up, or the xml parser will complain. Parameters can be added in job-data-map and passed into the job-class for processing. Which brings us to the last item of business: THE JOB IMPLEMENTATION CLASS!!!


Scheduler is the job implementing class that defines the unit of work performed by the quartz job. JobExecutionContext contains all the job metadata defined in the configuring xml, and the data map is the object that contains all the name/value pairs listed in the xml we just wrote up. Here’s the full class:

package com.examples.quartz;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class Scheduler implements Job {
	protected static final Log log = LogFactory.getLog(Scheduler.class);

	public void execute(JobExecutionContext jobContext) 
		throws JobExecutionException {

		log.info("entering the quartz config");

		JobDataMap map = jobContext.getJobDetail().getJobDataMap();
		String username = (String) map.get("username");
		String password = (String) map.get("password");

		log.info("mapped data: " + username + "/" + password);
	}
}


.. And that’s all there is to setting up a quartz job. If we want to add additional quartz jobs, all we need to do is add another job node in our quartz-config.xml and write another job interface implementing class. The rest pretty much stays the same, since all the heavy lifting has been done.

Ejb3 Basics: Deploying Message Driven Beans

Farewell to lazy auto queue generation in JBoss 5

MDBs were never easier to deploy and manage than when ejb3 first came out. In Jboss 4, all you had to do was annotate a class with @MessageDriven, sprinkle some meta data here and there, stick it in the oven and wham! Instant “I can’t believe I made an MDB!?!” In Jboss AS 5, however, MDB queues are no longer automatically created for your application on boot. An inspection of the MDB lifecycle illustrates why:

  1. MDB deploys
  2. No existing Topic/Queue
  3. Topic/Queue is automatically created
  4. MDB is undeployed
  5. There’s no callback/hook to remove the created Topic/Queue. And if there was, should undeploying the MDB even be allowed to trigger this action?

blatantly stolen from JBAS-5114, 5th comment down – thanks Andy, and DeCoste by proxy

So to reiterate… whereas JBoss AS 4.0 would have auto-created any MDB queues for you on boot, in 5.0 this no longer holds true. Consider the following MDB:

package com.examples.mdb;

import javax.ejb.*;
import javax.jms.*;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

@MessageDriven(name = "MyQueue", activationConfig = {
		@ActivationConfigProperty(
				propertyName = "destinationType", 
				propertyValue = "javax.jms.Queue"),
		@ActivationConfigProperty(
				propertyName = "destination", 
				propertyValue = "queue/MyQueue")
})
public class MyQueue implements MessageListener {
	private static final Log log = LogFactory.getLog(MyQueue.class);

	public void onMessage (Message msg) {
		try {
			log.debug("Processing MyQueue queue...");
			ObjectMessage oMsg = (ObjectMessage) msg;
			SomeObject result = (SomeObject) oMsg.getObject();

			// do stuff with the object

		} catch (Exception e) {
			log.error("error processing message", e);
		}
	}
}
In jboss 4 you could leave your MDB class like this, and the app server would automatically handle everything for you. If you plan on using jboss 5+, however, you will have to choose one of the following:

Wire it yourself in destinations-service.xml

In /deploy/messaging/destinations-service.xml, you can add the MDB destination yourself, letting jboss know to create your queue on boot. Here’s an example configuration:

<?xml version="1.0" encoding="UTF-8"?>

<!-- Messaging Destinations deployment descriptor. -->

<server>
	<mbean code="org.jboss.jms.server.destination.QueueService"
			name="jboss.messaging.destination:service=Queue,name=MyQueue"
			xmbean-dd="xmdesc/Queue-xmbean.xml">
		<depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
		<depends>jboss.messaging:service=PostOffice</depends>
	</mbean>
</server>
The only thing you need to change in this configuration is the queue name – make sure it matches the name of the queue annotated in your MDB class. This by itself is the closest you can get to being lazy. You will need to make sure, however, that you add one destination for each of the MDB queues your application uses. Option two requires a little bit more work, but you don’t have to muck around with the jboss environment…

Add deployment descriptors to auto create the queue via jboss.xml

You can instead deploy the optional jboss.xml file in your ejb jar file’s META-INF folder (in addition to your persistence.xml file if you’re using entities). Your ejb jar structure should then look like this:

	- / ejb classes and cool stuff here
	- META-INF/
		- persistence.xml
		- jboss.xml
And this is what jboss.xml would look like:

<?xml version="1.0" encoding="UTF-8"?>
<jboss xmlns="http://www.jboss.com/xml/ns/javaee" 
		xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
	<enterprise-beans>
		<message-driven>
			<ejb-name>MyQueue</ejb-name>
			<destination-jndi-name>queue/MyQueue</destination-jndi-name>
			<create-destination>true</create-destination>
		</message-driven>
	</enterprise-beans>
</jboss>
The key element in this file is <create-destination>true</create-destination>. This flags jboss to create the queue for you if it doesn’t already exist. This approach is probably better suited for jboss-only deployments, since the flag to auto-create the queue lives in a jboss-exclusive deployment descriptor – jboss.xml.

Once either of these has been implemented, your MDB should be deployed, initialized and ready to fire up. Oh, and fist pumps to ALR for pointing me in the right direction – cheers buddy!

Ejb3 basics: Entities

Entity Beans? Better than 2.1, I promise.

Ejb3 Entity beans are a type of enterprise java bean construct used to model data used by the ejb framework. The basic idea is to manipulate simple java objects, which represent your database data in concrete terms, and then have the framework handle as much of the plumbing as possible when you persist the data. Persisting means to store for later use in some data repository – usually some kind of database. By persisting these entities within the ejb framework we are able to abstract out tasks like updating a table and its associated foreign key table elements, perform queries, and get caching that automatically handles stuff like pre-populating java objects, along with lots of the other boring work. In short, using entities in your application will allow you to work more on implementing business logic and less on wiring and mapping DAOs to TransferObjects. For the sake of completeness, the other two important types of ejb beans should be mentioned: Session and Message driven beans. In case it wasn’t obvious, ejb3 is only possible with java 1.5+, since that’s the release that introduced annotations into the java language.

One of the great things about ejb3 is that entities and persistence got a major overhaul from 2.1 = MAJOR SEXY TIME. Ejb3 does a really good job of simplifying the object model by using annotations in pojos to mark up entities. You can now model your entire data structure in terms of plain old java objects and their relationships, and the persistence engine will go and create all the necessary tables and sequencers and supporting schema elements.
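How the engine creates those tables is controlled by your JPA provider’s settings. With Hibernate (JBoss’s default JPA provider), a persistence.xml along these lines turns on automatic schema generation – note that the unit name and datasource are placeholders, and the hbm2ddl property is Hibernate-specific rather than part of the JPA spec:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
	<persistence-unit name="myApp">
		<!-- placeholder datasource; point this at your own -->
		<jta-data-source>java:/DefaultDS</jta-data-source>
		<properties>
			<!-- create/update tables, sequences, and join tables on deploy -->
			<property name="hibernate.hbm2ddl.auto" value="update"/>
		</properties>
	</persistence-unit>
</persistence>
```

With this in place, deploying the ejb jar is enough to materialize the schema your annotated entities describe.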

Example Entity

Here’s an example of a bidirectional one to many relationship between User and Contact. Consider the following class:

package com.examples.entities;  

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.persistence.*;

@Entity
@Table(name="tb_user")
@SequenceGenerator(name = "sq_user",sequenceName = "sq_user", initialValue=1)
public class User implements Serializable {

	private static final long serialVersionUID = 1L;

	@Id
	@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="sq_user")
	protected Long id;

	@Column(name="user_name", nullable=false, length=32)
	protected String username;
	protected String password;
	protected String email;	

	@OneToMany(mappedBy="user")
	protected List<Contact> contacts = new ArrayList<Contact>();

	public Long getId() {
		return id;
	}
	public void setId(Long id) {
		this.id = id;
	}
	public String getUsername() {
		return username;
	}
	public void setUsername(String username) {
		this.username = username;
	}
	public String getPassword() {
		return password;
	}
	public void setPassword(String password) {
		this.password = password;
	}
	public String getEmail() {
		return email;
	}
	public void setEmail(String email) {
		this.email = email;
	}
	public List<Contact> getContacts() {
		return contacts;
	}
	public void setContacts(List<Contact> contacts) {
		this.contacts = contacts;
	}
}

This is a fairly common type of entity. Going from top to bottom, let’s take a look at the annotations used and examine what they do.


@Entity

@Entity is the annotation that marks this particular java class as an ejb entity. This tells the persistence engine to load up this class and its associated annotations and use it as a model for data in the database. Technically this is the only annotation required in the class for a very simple entity, but there are other annotations we can use to customize and declare more complex relationships.


@Table(name=”tb_user”)

@Table lets you name the table modeled by your pojo. It’s just a simple way to keep things organized in the database. If you don’t specify the table name, it will default to the class name.

@SequenceGenerator(name = “sq_user”,sequenceName = “sq_user”, initialValue=1)

@SequenceGenerator lets you set up the sequence used for the primary key generation. This is required when you choose GenerationType.SEQUENCE as your primary key generation strategy. The name must match the @GeneratedValue’s generator value. This is how the persistence engine knows how to map the sequence to the column.


@Id

@Id indicates that the following class method or field maps the table’s primary key.

@GeneratedValue(strategy=GenerationType.SEQUENCE, generator = “sq_user”)

@GeneratedValue maps the type of primary key incrementing strategy to use when adding new records to the database. Here are the possible strategies:

  • GenerationType.AUTO
    This indicates that the persistence engine will decide which incrementing strategy to use. The lazy man’s multi-vendor option.
  • GenerationType.IDENTITY
    This indicates that the persistence engine should use the identity column for incrementing. Vendors that can use this are ones that support an “AUTO-INCREMENT” type of column flag. MySQL is one example of a vendor that supports this type.
  • GenerationType.SEQUENCE
    This tells the persistence engine to use a sequence to manage the increment values when inserting new values into the table. Postgres is an example of a vendor that uses sequences.
  • GenerationType.TABLE
    This tells the persistence engine to use a separate table to track increments on the primary key. This is more of a general strategy than a vendor specific implementation.

@Column(name=”user_name”, nullable=false, length=32)

@Column allows you to define column attributes for each class field. You can choose to define all of the possible relevant attributes or just the ones that you want to define. Other possible attributes are:

  • columnDefinition=”varchar(512) not null”
    Allows you to define native sql to your column definition
  • updatable=false
    Sets the column to allow updates or not. If it is not explicitly set to false, it will default to true, allowing updates to happen.
  • precision=10
    Decimal precision
  • scale=5
    Decimal scale
  • unique=true
    Defines if the column should contain only unique values.
  • table=”tb_user”
    Maps the table name to which this column belongs.


@OneToMany(mappedBy=”user”)

@OneToMany lets the persistence engine know that this field or method has a one to many type of relationship with the mapped object, and the mappedBy attribute lets the persistence engine know the foreign key used when mapping the relationship. It will then set up any necessary relationship tables needed to express the relationship. This would normally include creating a separate table to hold all the key mappings.


@JoinTable(name=”tb_user_contact”)

@JoinTable lets you define the join table’s properties. In this case we’re using it to name the join table mapping the one to many relationship. A more complete @JoinTable annotation looks like this:

	@OneToMany
	@JoinTable(name = "tb_user_contact",
			joinColumns = @JoinColumn(name = "user_id"),
			inverseJoinColumns = @JoinColumn(name = "contact_id"))
	public List<Contact> getContacts() {
		return contacts;
	}

This covers the owning class, here’s the class being pwnt:

package com.examples.entities;

import java.io.Serializable;

import javax.persistence.*;

@Entity
@Table(name="tb_contact")
public class Contact implements Serializable {

	private static final long serialVersionUID = 1L;

	@Id
	@GeneratedValue(strategy=GenerationType.AUTO)
	protected Long id;

	protected String email;	

	@ManyToOne
	protected User user;

	public Long getId() {
		return id;
	}
	public void setId(Long id) {
		this.id = id;
	}
	public String getEmail() {
		return email;
	}
	public void setEmail(String email) {
		this.email = email;
	}
	public User getUser() {
		return user;
	}
	public void setUser(User user) {
		this.user = user;
	}
}

The @ManyToOne annotation marks the connecting foreign key used in the bidirectional mapping. When the persistence engine reads all the entities in and starts generating all the sql to model the object model, it will generate three tables between these two java classes. One table, “tb_user”, will represent the user class, “tb_contact” will represent the contact class, and finally “tb_user_contact” will represent the relationship mapping table. This annotation is what turns a unidirectional relationship into a bidirectional relationship. Here’s an example:

	@ManyToOne
	public User getUser() {
		return user;
	}
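To make the generated schema concrete, here is a rough sketch of the three tables described above – exact names, column types, and constraints depend on your provider and database dialect, so treat this as illustrative only:

```sql
-- roughly what the persistence engine generates for User/Contact
create table tb_user (
	id bigint not null primary key,
	user_name varchar(32) not null,
	password varchar(255),
	email varchar(255)
);

create table tb_contact (
	id bigint not null primary key,
	email varchar(255)
);

-- the relationship mapping table
create table tb_user_contact (
	user_id bigint not null references tb_user,
	contact_id bigint not null references tb_contact
);
```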

@ManyToMany describes the many to many association between entities. It is used in conjunction with the @JoinTable annotation to define the mapping table used for storing all the relationships. Here’s an example:

	@ManyToMany
	@JoinTable(name = "tb_user_contact")
	public List<Contact> getContacts() {
		return contacts;
	}
and then in the Contact class we would have:

	@ManyToMany(mappedBy = "contacts")  
	public List<User> getUsers() {  
		return users;  
	}
The owning entity will always have the @JoinTable, and the owned entity will always have the @ManyToMany(mappedBy=?) annotation.

These are just a few of the things that can be done with ejb3. I would suggest sifting through the javax.persistence javadocs to get a better feel for the other possible annotations.

For more reading:
Javax Persistence API
Java 5 Persistence Tutorial
Official Java Persistence FAQ