« Posts under Java

Manually override and launch quartz jobs…

Override quartz settings?

So you have a quartz job that’s chugging along nicely until you’re hit with reality: the job’s parameters change, the job needs to be suspended, or something else happens that forces you to recompile and redeploy your application just to update the packaged quartz job properties. This is no fun. You will undoubtedly have to take the updated code through the regular QA cycle, regression test, and then ultimately redeploy your code into the production environment. Surely there must be some way to address this problem when using Jboss…

One way I came up with was to divorce the job execution code from the job invocation, while making sure that the JobDataMap always checked an external resource before defaulting to the packaged resource within the deployed artifact. To allow for manual invocation, I also added a servlet that basically just wrapped the decoupled job invocation code in order to launch the quartz job. I also added a property to the JobDataMap – “enabled” – which I used as a flag for whether the job should fire or not. Because it would try to load an external resource before defaulting, we were then able to have complete control over the quartz job’s properties. Note that you can’t change the cron fire date by using this method – the job itself is loaded in from the minute your application fires up – to reload the job you’d have to programmatically find the existing job, destroy it and then create a new one based off the external properties. In my particular case we didn’t need to go that far, but that option is available for those that need it.
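For those who do need to go that far, a minimal sketch against the Quartz 1.x API might look something like this (the job/group names and cron expression here are hypothetical, and the JobDataMap is assumed to have already been populated from the external xml):

Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

// find and destroy the existing job
scheduler.deleteJob("myQuartzJob", "schedulers");

// recreate it using the externally loaded properties
JobDetail detail = new JobDetail("myQuartzJob", "schedulers", QuartzJob.class);
detail.setJobDataMap(map); // the JobDataMap built from the external quartz-config.xml

// wire up a fresh cron trigger and schedule the replacement job
// (these calls can throw SchedulerException/ParseException)
CronTrigger trigger = new CronTrigger("myQuartzJob-trigger", "scheduler-triggers",
        "myQuartzJob", "schedulers", "0 0 4 * * ?");
scheduler.scheduleJob(detail, trigger);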

The steps:

1) Stick a copy of the quartz-config.xml file in the jboss.conf.dir location: maybe something like “/jboss/server/myInstance/conf/quartz-config.xml”. This conf directory is explored in depth in the related post Jboss System Properties.

2) Rig your quartz Job class so the execute(JobExecutionContext jobContext) method simply calls a plain launch() method. By doing this you separate the call that launches the job from the quartz-specific entry point, so any externally invoking code can call your launch() method directly without having to figure out how to populate a JobExecutionContext object to pass into execute():

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class QuartzJob implements Job {

    private static final Log log = LogFactory.getLog(QuartzJob.class);

    public void execute(JobExecutionContext jobContext)
        throws JobExecutionException {

         log.info("launching regularly scheduled quartz job");
        launch();
    }

    public void launch() {

         // your job code would go here

    }

}

3) Read in that quartz-config.xml file from the Jboss conf directory if one exists, and extract the properties from the xml file to populate your own JobDataMap object. Default to reading the quartz-config.xml packaged in your war, jar or ear file:

public void launch() { 

  Document document = null; 
  SAXReader reader = new SAXReader(); 
  JobDataMap map = new JobDataMap(); 

  try { 

       // this section here extracts properties from the config file	    
       InputStream is = null; 
       String quartzConfig = "quartz-config.xml"; 


       try { 

	    String path = System.getProperty("jboss.server.config.url")
		 +quartzConfig;   
	    URL url = new URL(path); 

	    log.info("attempting to load " + quartzConfig + " file from: " + path); 
	    is = url.openStream(); 
	    log.info("loaded " + quartzConfig + " from URL: " + path);   

       } catch (Exception e) { 

	    is = this.getClass().getResourceAsStream("/" + quartzConfig); 
	    log.info("couldn't load " + quartzConfig + 
		 " from URL, loaded packaged from war: /"+quartzConfig); 

       } 

       document = reader.read(is); 

       String xPath =
            "/quartz/job/job-detail[name = 'myQuartzJob']/job-data-map/entry"; 
       List<Node> nodes = document.selectNodes(xPath); 
       for (Node node : nodes) { 
	    String key = ((Node) node.selectNodes("key").get(0)).getText(); 
	    String value = ((Node) node.selectNodes("value").get(0)).getText(); 

	    map.put(key, value); 
       }

    } catch (Exception e) { 
         e.printStackTrace(); 
    } 

    String enabled = map.getString("enabled");
    if (enabled != null && enabled.equalsIgnoreCase("true")) { 
  
  	    // your job code here...

    }
}

You could just as well have hardcoded the location of your quartz-config.xml file into a java.net.URL object, and then grabbed that InputStream for the xpath extraction.
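For instance, a hedged sketch of that hardcoded variation (the path is made up):

// hardcode the external config location instead of reading
// the jboss.server.config.url system property
URL url = new URL("file:///jboss/server/myInstance/conf/quartz-config.xml");
InputStream is = url.openStream();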

4) Wrap your quartz Job class in an external servlet:

public class MyQuartzServlet extends GenericServlet { 
 
     private static final long serialVersionUID = 1L; 
     private static final Log log = LogFactory.getLog(MyQuartzServlet.class); 
 
     @Override 
     public void service(ServletRequest req, ServletResponse res)
          throws ServletException, IOException { 
 
          log.info("launching quartz job from servlet"); 
          QuartzJob importJob = new QuartzJob(); 
          importJob.launch(); 
           
          // forward to some jsp, and/or add other job success/fail logic here
          getServletConfig().getServletContext()
                    .getRequestDispatcher("/servlet.jsp").forward(req, res); 
           
     } 
      
}

Of course you’d have to configure the servlet and servlet-mappings in your application’s web.xml, but that should be pretty straightforward.
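As a rough sketch, the wiring might look something like this (servlet name, package and url-pattern are all made up):

<servlet>
	<servlet-name>quartzLauncher</servlet-name>
	<servlet-class>com.example.web.MyQuartzServlet</servlet-class>
</servlet>
<servlet-mapping>
	<servlet-name>quartzLauncher</servlet-name>
	<url-pattern>/launchQuartzJob</url-pattern>
</servlet-mapping>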

Congrats, you now have a quartz job that loads an external configuration file, and that can also be invoked manually through a servlet. I’m not saying this is perfect or that it should be used in every possible quartz scenario, but this approach works well for quartz jobs whose properties might need overriding or temporary disabling. I can also understand the argument for why you would want to cycle every configuration change through QA regardless. I hope this at least gives some folks ideas outside the proverbial box. Now on to bigger fish to fry…

Apache XSL-FO’ sho v1.0

Transforming XML into PDFs.. and stuff

If you’ve ever been tasked with providing PDF documents via XSL, you’ve surely done some homework and shopped around for viable third party libraries. Some are good, some are great and rightly charge a price for it, and some are just flat out incomplete or shoddy in their documentation. It’s not a knock on anyone, it’s just a fact well known to open source developers. Historically what has been missing is an open standard for pdf generation, and possibly other output formats.

Enter XSL-FO: XSL Formatting Objects is an open standard for formatting documents in order to produce media artifacts such as PDF, postscript (PS), rich text format (RTF), and png files. Because it’s XML centric, you can marry your XML data to an XSL-FO stylesheet and perform a transformation that will output a file in any of these formats. XSL-FO is simply the XSL dialect used to lay out the document, and Apache FOP is the open source java based software you can use to process those transformations.

Apache FOP has been slowly making its complete debut over the past 3 years. Version 1.0 was finally released around the 12th of July, so it’s essentially a fresh release. Before that, 0.95 was the closest thing to production ready, but now that 1.0 is out, a more complete implementation awaits. There are still a few loose ends to tie up though; a complete rundown of FO compliance can be found on apache’s XSL-FO compliance page.

On with the examples:

The XML data

<block>
	<date>july 27th, 2010</date>
</block>

This is a very simple xml document, which we will be reading from in order to stamp the date onto a pdf document.

The XSL-FO layout

<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet 
	version="1.0"
	xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
	xmlns:fo="http://www.w3.org/1999/XSL/Format">
	
	<xsl:template match="/">
			
	<fo:root font-family="Verdana" font-size="12pt" text-align="center"
	    xmlns:fo="http://www.w3.org/1999/XSL/Format"
	    xmlns:fox="http://xml.apache.org/fop/extensions">
	
	<fo:layout-master-set>
	  <fo:simple-page-master master-name="master">
		<fo:region-body margin="0in"
	  		background-image="http://my.images.com/banner.jpg"
			background-repeat="no-repeat"
			background-position="center"  />
	  </fo:simple-page-master>
	</fo:layout-master-set>
	
	<fo:page-sequence master-reference="master">

	  <fo:flow flow-name="xsl-region-body">
		  
		<fo:block 
          	margin-top="50px"
          	margin-left="200px">
			Today's XML date is: <xsl:value-of select="/block/date"/>
		</fo:block>
		  
	  </fo:flow>
	</fo:page-sequence>
	
	</fo:root>
	  
  </xsl:template>

</xsl:stylesheet>

This is the XSL-FO layout we’ll be using to stamp on the pdf. It’s marked up using regular XSL-FO. Covering the syntax of XSL-FO is beyond the scope of this article, but there are plenty of resources and tutorials online such as the W3Schools.com XSL-FO and Renderx.com tutorials.

On with the java

Finally, we come to the java code and apache’s fop usage:

	protected void export() throws  IOException {
	
	    //Set up the output stream the rendered pdf will be written to
		FileOutputStream out = new FileOutputStream("C:/image/layout.pdf");
		
		try {
		    
			// read the xml data and xsl stylesheet into Strings
			// (FileUtils.readFile here is a local helper, not shown)
			String xml = FileUtils.readFile("C:/image/banner.text.xml");
			String xsl = FileUtils.readFile("C:/image/banner.layout.fo"); 
			
	        // configure fopFactory as desired
	        FopFactory fopFactory = FopFactory.newInstance();
	        TransformerFactory factory = TransformerFactory.newInstance();
	        
		    //Setup FOP
		    Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);
	
		    //Setup Transformer
		    Source xsltSrc = new StreamSource(new StringReader(xsl));
		    Transformer transformer = factory.newTransformer(xsltSrc);
	
		    //Setup input
		    Source src = new StreamSource(new StringReader(xml));
	
		    //Make sure the XSL transformation's result is piped through to FOP
		    Result res = new SAXResult(fop.getDefaultHandler());	    
		    
		    //Start the transformation and rendering process
		    transformer.transform(src, res);
		    
		} catch (Exception e) {	
			e.printStackTrace();
		} finally {
			out.close();
		}
	}

Pretty straightforward xslt-looking code. But what if we want to override the FOP PDF generation defaults? What if we want to produce a document that isn’t regular PDF page size, like a banner, or if we want to produce a png image? Luckily, FOP offers a factory configuration mechanism we can use to customize the outputs.

Rendering the output as a PNG file

The java code is pretty much the same thing, with some small differences. First you’ll want to invoke the setUserConfig() method on the FopFactory object. This will flag apache FOP to load a custom configuration from the specified file. Secondly you’ll need to set the exporting mime type to MimeConstants.MIME_PNG, as shown in the java code snippet below.

// configure fopFactory as desired
FopFactory fopFactory = FopFactory.newInstance();
fopFactory.setUserConfig(new File(rootPath + "export.conf.xml"));
TransformerFactory factory = TransformerFactory.newInstance();

//Setup FOP
Fop fop = fopFactory.newFop(MimeConstants.MIME_PNG, out);

Lastly, you’ll want to define your export.conf.xml file. The only things that stray from the defaults are the exported object’s dimensions (set in the example below to 150px high by 900px wide) and the renderer element that defines an “image/png” type. This renderer block flags the processor to export as PNG. At the moment the only other image export format is TIFF, but between these two, most purposes are likely met. It’s worth mentioning that FOP also supports export into Postscript, PCL, AFP, RTF, XML, and TXT to name a few. More details can be found on Apache FOP’s Output Target page. Here’s the source:

<?xml version="1.0"?>

<fop version="1.0">

	<!-- Base URL for resolving relative URLs -->
	<base>.</base>

	<!--
		Source resolution in dpi (dots/pixels per inch) for determining the
		size of pixels in SVG and bitmap images, default: 72dpi
	-->
	<source-resolution>72</source-resolution>
	<!--
		Target resolution in dpi (dots/pixels per inch) for specifying the
		target resolution for generated bitmaps, default: 72dpi
	-->
	<target-resolution>72</target-resolution>

	<!--
		Default page-height and page-width, in case value is specified as auto
	-->
	<default-page-settings height="150px" width="900px" />

	<!-- Uses renderer mime type for renderers -->
	<renderers>

		<renderer mime="image/png">
		  <transparent-page-background>false</transparent-page-background>
		  <fonts><!-- described elsewhere --></fonts>
		</renderer>

	</renderers>

</fop>

So if you want to export to a different format, all you need to do is use a custom configuration, set the renderer format to the one you’d like to use, and override any default document properties you wish.
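For example, switching the export over to TIFF should just be a matter of swapping the mime constant and the renderer entry – a sketch, assuming the same setup as the PNG example above:

// same setup as before, only the target mime type changes
Fop fop = fopFactory.newFop(MimeConstants.MIME_TIFF, out);

// and in export.conf.xml, the renderer element becomes:
// <renderer mime="image/tiff"> ... </renderer>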

By leveraging an open standard like XSL-FO you can use different vendors for your pdf generation code, and while Apache’s FOP implementation isn’t 100% complete in its support for XSL-FO, it does do a good job of supporting what most folks will need on a daily basis. It’s nice to see a complete version release after a long wait.

Resources:
Apache FOP website. v1.0 finally released on 7/12/2010 – yay!
Apache FOP compliance guide
XSL-FO Object Model documentation
Renderx.com tutorial on XSL-FO

There’s also the ultimate XSL-FO list of resources:
Whoishostingthis.com xsl-fo Resources

Sardine powered webdav client?

Extra Sardines on my pizza please

A few days ago I came across the need for an easy to use webdav client. Currently we’re using jakarta slide, which as it turns out is a project that was discontinued (as of fall 2007!), and whose code base as of this writing is practically 10 years old. Who wants those jars collecting dust in their lib directories? Sure it works, but hey, I’m trying to keep up with the Jones’ here, I’d like an up-to-date library that hasn’t been discontinued.

Dismayed, I took a look at the replacement suggested by the jakarta site – the Jackrabbit project, which is a java based content repository API implementation (JCR, as outlined in JSR 170 and 283). Uh.. I’m not really looking to integrate a full fledged content repository into my project just so I can access some files on a webdav server. If I was building a CMS though, I’d be way more interested. All I was looking for was an easy way to access files on a webdav server.

Next I found Apache’s commons-vfs project but I was disappointed to find this note regarding webdav: “.. We can’t release WebDAV as we depend on an snapshot, thus it is in our sandbox.” (full page here, skip to “Things from the sandbox” ). Dammit! Guess I’ll have to keep looking..

Finally, I stumbled across Google’s Sardine project, an oasis in a desert of mismatched suitors. I practically feel guilty about rehashing what’s already well documented, but I am compelled, if only to underscore the ease of use.

Classpath Dependencies

At the minimum you’ll need commons-logging.jar, commons-codec.jar, httpcore-4.0.1.jar and httpclient-4.0.1.jar if you’re on Java 6+. If you’re on Java 5 you’ll need JAXB 2.1 and its dependencies. Luckily for you, the authors have included links to the JAXB jars and have bundled the other jars in the Sardine distribution so you can easily add them to your classpath.

Code Examples

Using Sardine is really simple, and pretty self explanatory. You must first call SardineFactory.begin() to initiate the webdav session. If you don’t have authentication enabled, you don’t need to provide the username/password parameters.

public List<DavResource> listFiles() throws SardineException {

	log.debug("fetching webdav directory");

	Sardine sardine = SardineFactory.begin("username", "password");
	List<DavResource> resources = sardine.getResources("http://webdav/dir/");

	return resources;
}

This List of DavResource objects is essentially metadata about the webdav files, which you can then use to perform whatever tasks you need.
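As a minimal sketch, you could walk the returned list like this (relying only on toString(), since the DavResource accessors have varied between Sardine versions):

for (DavResource resource : resources) {
	log.debug("found webdav resource: " + resource);
}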

Grabbing the contents of a file is just as easy:

	public InputStream getFile(String fullURL) throws SardineException {

		log.info("fetching webdav file");

		Sardine sardine = SardineFactory.begin("username", "password");
		return sardine.getInputStream(fullURL);
	}

as is saving a file:

	public void storeFile(String filePath) throws IOException {
		
		Sardine sardine = SardineFactory.begin("username", "password");
		byte[] data = FileUtils.readFileToByteArray(new File(filePath));
		sardine.put("http://webdav/dir/filename.jpg", data);
	}

checking if a file exists:

	public boolean fileExists(String fileURL) throws IOException {
		
		Sardine sardine = SardineFactory.begin();
		return sardine.exists(fileURL);
	}

Other code examples – deleting, moving files from one place to another, copying files so you end up with two, and creating directories – can be found in the user guide on the Sardine project page.

Overall, Sardine is simple, elegant, easy to use and pretty darned sexy, so check it out. I guess it’s time to update all that jakarta API related code…

Sending Attachments with the Javamail 1.4.x API

Make your emails interesting with attachments!

Not that your emails aren’t already interesting – but if you have some kind of regular job running and you want to send your recipients a results file as an attachment, this code example illustrates one way it can be done. It’s pretty much the same thing as sending a regular email, except that it uses a multipart as the body content of the message:

package com.faceroller.mail;

import java.io.IOException;
import java.util.Properties;

import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import javax.mail.*;
import javax.mail.internet.*;
import javax.naming.NamingException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class Mailer {
	
	private static final Log log = LogFactory.getLog(Mailer.class);

	// smtp settings -- swap in values for your environment
	private static String host = "smtp.example.com";
	private static String port = "25";
	private static String username = "username";
	private static String password = "password";

	public static void send(Email email)
			throws MessagingException, NamingException, IOException {

		/**
		 * prefer the jndi lookup in your container, but when debugging
		 * manually setting properties explicitly will do
		 * 
		 */

		// InitialContext ictx = new InitialContext();
		// Session session = (Session) ictx.lookup("java:/Mail");

		Properties props = (Properties) System.getProperties().clone();
		props.put("mail.transport.protocol", "smtp");
		props.put("mail.smtp.host", host);
		props.put("mail.smtp.port", port);
		props.put("mail.debug", "true");

		/**
		 * create the session and message
		 * 
		 */
		Session session = Session.getInstance(props, null);

		/**
		 * set the message basics
		 * 
		 */
		MimeMessage message = new MimeMessage(session);
		message.setFrom(InternetAddress.parse(email.getFrom(), false)[0]);
		message.setSubject(email.getSubject());
		message.setRecipients(
			javax.mail.Message.RecipientType.TO,
			InternetAddress.parse(email.getTo(), false)
		);


		/**
		 * multipart attachments here, part one is the message text, 
		 * the other is the actual file. notice the explicit mime type 
		 * declarations
		 * 
		 */
		Multipart multiPart = new MimeMultipart();

		MimeBodyPart messageText = new MimeBodyPart();
		messageText.setContent(email.getBodyAsText(), "text/plain");
		multiPart.addBodyPart(messageText);

		MimeBodyPart report = new MimeBodyPart();
		report.setFileName(email.getFileName());
		report.setContent(email.getAttachmentAsText(), "text/xml");
		multiPart.addBodyPart(report);

		MimeBodyPart rarAttachment = new MimeBodyPart();
		FileDataSource rarFile = new FileDataSource("C:/my-file.rar");
		rarAttachment.setDataHandler(new DataHandler(rarFile));
		rarAttachment.setFileName(rarFile.getName());
		multiPart.addBodyPart(rarAttachment);

		/**
		 * set the message's content as the multipart obj
		 */
		message.setContent(multiPart);


		/**
		 * do the actual sending here
		 * 
		 */
		Transport transport = session.getTransport("smtp");

		try {

			transport.connect(username, password);
			transport.sendMessage(message, message.getAllRecipients());

			log.warn("Email message sent");

		} finally {
			transport.close();
		}
	}
}

You’ll notice the first multipart’s content is a String with the mime type “text/plain”; this is what renders that part as the message’s body. You can add as many parts as you want, each one defined as a separate attachment. If you want to attach a rar or zipped up archive, you can use the activation libraries to include it as one of the parts. The MimeBodyPart will automatically detect and fill in the mime type for the file – it’s provided by the FileDataSource. In JBoss, if you’re using the container’s mail service, you can configure the mail server properties in the deploy/mail-service.xml file, and then use the initial context to get a handle on that configured mail session.
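For reference, here’s a hedged sketch of the Email transfer object the Mailer above assumes – the field names are inferred from the getters used in the example:

public class Email {

	// simple transfer object; a getter/setter pair for each field is assumed
	private String from;
	private String to;
	private String subject;
	private String bodyAsText;
	private String fileName;
	private String attachmentAsText;
}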

Get the jars and supporting docs from the Javamail site here: http://java.sun.com/products/javamail

Integrating Spring MVC 3.0 with JSR 303 (aka javax.validation.*)

Annotated POJO validation comes to a JDK near you!

The new annotated validation spec (jsr 303) is pretty slick, especially when used along side Spring MVC 3.0, and when backed by ejb3 entities. I’m pretty impressed with how easily it integrates with Spring MVC’s framework, and with how seamlessly error messages are passed to the form taglibs so they show up in your web forms.

I know some of you might argue that the current validation framework might not address complex validations, but after giving Hibernate’s reference implementation documentation a look, it seems interdependent validations are at least possible through embedded @Valid in complex objects. Even if you have to come up with your own really weird validation for a particular field, jsr 303/hibernate offers a way to create your own custom annotation driven validations. For the remaining 95% of all the other web forms, you’re probably going to be alright if you use the pre-defined validations offered by jsr 303.

Getting started

Download the jsr 303 reference implementation jars from SourceForge, via Hibernate’s download page. You’ll need to add the main Hibernate validator jar (currently hibernate-validator-4.0.2.GA.jar as of this writing) and the jars included in the release’s lib directory to your application’s classpath if they’re not already there (if you’re on jboss 5.1, probably at least validation-api-1.0.0.GA.jar, maybe more). The Hibernate reference implementation release also includes the jar files required to run in a jdk 5 runtime; include those if you’re not running on jdk 6. Download Spring MVC from Spring’s download page; it’s part of the Spring 3.0 release. Spring MVC requires the following jars in your classpath:

    • org.springframework.asm
    • org.springframework.beans
    • org.springframework.context
    • org.springframework.context-support
    • org.springframework.core
    • org.springframework.expression
    • org.springframework.web
    • org.springframework.web.servlet

Wiring Spring MVC

You’ll need to make sure you map Spring MVC correctly. Consider the following in web.xml:

	<!-- Spring Action Servlet -->
	<servlet>
		<servlet-name>spring</servlet-name>
		<servlet-class>
			org.springframework.web.servlet.DispatcherServlet
		</servlet-class>
		<init-param>
			<param-name>contextConfigLocation</param-name>
			<param-value>
				/WEB-INF/spring-servlet.xml
			</param-value>
		</init-param>
		<load-on-startup>1</load-on-startup>
	</servlet>
	<servlet-mapping>
		<servlet-name>spring</servlet-name>
		<url-pattern>*.sf</url-pattern>
	</servlet-mapping>

And then in spring-servlet.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:context="http://www.springframework.org/schema/context"
	xmlns:mvc="http://www.springframework.org/schema/mvc"
	xsi:schemaLocation="
		http://www.springframework.org/schema/beans
		http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
		http://www.springframework.org/schema/context
		http://www.springframework.org/schema/context/spring-context-3.0.xsd
		http://www.springframework.org/schema/mvc
		http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">
		
	<!-- Scans the class path of this application for @Components to deploy -->
	<context:component-scan base-package="com.faceroller.web" />
	
	<context:annotation-config/>

	<bean id="multipartResolver" 
	class="org.springframework.web.multipart.commons.CommonsMultipartResolver"/>

	<!-- flags annotations for processing -->
	<mvc:annotation-driven />

	<!-- forward "/" requests -->
	<mvc:view-controller path="/" view-name="welcome"/>

	<!-- Configures Handler Interceptors -->	
	<mvc:interceptors>
	<!-- Changes the locale when a 'locale' request parameter is sent -->
	<bean class="org.springframework.web.servlet.i18n.LocaleChangeInterceptor"/>
	</mvc:interceptors>

	<!-- Saves a locale change using a cookie -->
	<bean class="org.springframework.web.servlet.i18n.CookieLocaleResolver" 
		id="localeResolver" />

	<!-- hey man, I like my jsp files in "/". WEB-INF just seems.. ugly -->
	<bean 
	  class="org.springframework.web.servlet.view.InternalResourceViewResolver">
	  <property name="viewClass" 
			value="org.springframework.web.servlet.view.JstlView"/>
	  <property name="prefix" value="/"/>
	</bean>
	
	<bean 
	  class="org.springframework.context.support.ResourceBundleMessageSource"
	  id="messageSource">  
		<property name="basename" value="messages" />
	</bean>  
	
</beans>

Now that we’ve squared away the setup, on with the examples:

Example ejb3 jsr-303 validation equipped entity bean

import java.io.Serializable;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import javax.persistence.*;
import javax.validation.Valid;
import javax.validation.constraints.*;

import org.hibernate.validator.constraints.Email;
import org.hibernate.validator.constraints.NotEmpty;

@Entity
@Table(name="tb_inventory")
public class InventoryItem implements Serializable {

	private static final long serialVersionUID = 1L;
	
	@Id  
	@GeneratedValue(strategy=GenerationType.IDENTITY) 
	protected int id;
	
	@NotNull
	protected BigDecimal price = new BigDecimal("0.00");	
	
	@NotEmpty(message = "Name is a required field")
	protected String name;
	
	@Min(100)
	protected int minimumPrice;

	@Email
	private String mail;

	@Pattern(regexp="[a-z]+")
	private String lowerCaseName;
	
	@Future
	private Date futureDate;

	@AssertTrue
	private boolean mustBeTrue;

	@OneToMany
	@JoinTable(name="tb_order_items_to_images")
	@Valid
	protected List<InventoryImage> images = new ArrayList<InventoryImage>();
	
	
	// setters/getters here...
}

Lets examine the annotations:

@NotNull

@NotNull flags the annotated field as valid only if it has a value assigned. A String with a null value will fail, while a String with a “” value will pass. Since the “message” parameter is not defined, the error message will default to whatever the validation package ships with – something along the lines of “may not be null”.

@NotEmpty(message = “Name is a required field”)

@NotEmpty flags the annotated field as valid if the field is both not null and not a value of “”. Since the “message” parameter is defined for this annotation, the param value will replace the default message passed into spring mvc’s error framework.

@Min(100)

@Min flags the field as valid only if it has a value equal to or higher than the value in the parens. The contrary to this is @Max, which will flag as valid values lower than the value in the parens.

@Email

@Email flags the field as valid only if the field is a valid email.

@Pattern(regexp=”[a-z]+”)

@Pattern will flag the field as valid only if the string matches the regular expression passed in as the parameter. In this case it will only pass if the string is made up only of lowercase letters.

@Future

@Future will flag as valid only if the date annotated is in the future. The contrary to this is @Past, which would be valid only if the date has already passed.

@AssertTrue

@AssertTrue will flag as valid if the annotated boolean resolves to true. The contrary to this is @AssertFalse, which will flag as valid only if the boolean resolves to false.

@Valid

@Valid will validate only if the complex object annotated validates as true. Lets say in this case that InventoryImage has two validation annotated fields; if any InventoryImage fails either of those two fields, then the enclosing InventoryItem will fail validation because of the @Valid annotation. This is how complex cross-object validations are supported, short of defining your own.
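For illustration, a hypothetical InventoryImage with two validated fields might look like this (the class isn’t shown in this post, so the fields here are invented):

@Entity
@Table(name="tb_inventory_image")
public class InventoryImage implements Serializable {

	private static final long serialVersionUID = 1L;

	@Id
	@GeneratedValue(strategy=GenerationType.IDENTITY)
	protected int id;

	// invented fields: if either fails validation, the enclosing
	// InventoryItem fails too, thanks to the @Valid annotation
	@NotEmpty(message = "Image path is a required field")
	protected String path;

	@Min(1)
	protected int displayOrder;

	// setters/getters here...
}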

Now that we’ve annotated our bean, we’ll need to hook it into a Spring MVC controller.

The Spring MVC Controller

package com.faceroller.web;

@Controller
@RequestMapping("/inventory")
public class InventoryController {

	private static Log log = LogFactory.getLog(InventoryController.class);
	
	/**
	 * initialize the form
	 *
	 */
	@RequestMapping("/add/setup.go")
	public ModelAndView addInventorySetup(InventoryItem item){
		
		log.info("setup add inventory");
		return new ModelAndView("/inventory/add.jsp", "item", item);
	}

	/**
	 * process the form
	 *
	 */
	@RequestMapping(value="/add/meta.go", method=RequestMethod.POST)
	public String processInventoryMeta(
			@ModelAttribute("item") @Valid InventoryItem item, 
			BindingResult result) {
		
		log.info("process add inventory");
		
		if (result.hasErrors()) {
			return "/inventory/add.jsp";
		}

		InventoryService service = ServiceLocator.getInventoryService();
		service.addInventoryItem(item);
		
		return "redirect:/add/images/"+item.getId();
	}
	
	/**
	 * forward to whatever page you want
	 *
	 */	
	@RequestMapping("/browse/item/{itemId}")
	public ModelAndView getInventoryItem(@PathVariable int itemId){

		log.info("getting item");

		InventoryService service = ServiceLocator.getInventoryService();
		InventoryItem item = service.getInventoryItemById(itemId);

		return new ModelAndView("/inventory/browse.item.jsp", "item", item);
	}
	
}

Pay special attention to

	@RequestMapping(value = "/add/meta.go", method=RequestMethod.POST)
	public String processInventoryMeta(
			@ModelAttribute("item") @Valid InventoryItem item, 
			BindingResult result) 

You’ll notice @Valid marked up right before the InventoryItem item bean parameter. This is the annotation that does all the validation magic for us. There is no need to implement a custom validator factory, as spring mvc’s framework would normally require. If the bean fails validation, BindingResult result will be prepopulated with all corresponding JSR 303 validation errors. The catch is you have to add the @ModelAttribute(“item”) annotation to the signature, otherwise the form bean in the jsp will not have access to all the error messages passed along by the validations.

The jsp code

<form:form method="post" commandName="item" action="/process/form">
<table width="100%" border="0">
	<tr><td colspan="3" class="bottomPadding">
		<span class="secionHeader">Add item to inventory</span>
	</td></tr>
	<tr><td class="labelColumn" width="100">
		Price 
	</td><td width="100">
		<form:input path="price"/>
	</td><td>
		<form:errors path="price" cssClass="error"/>
	</td></tr>
	<tr><td class="labelColumn">
		Name
	</td><td>
		<form:input path="name"/>
	</td><td>
		<form:errors path="name" cssClass="error"/>
	</td></tr>
</table>
</form:form>

This is just a simple form, nothing new here, but I’m including it for completeness. The Spring MVC framework will correctly populate the form tags with any bean errors should the form fail validation. The form tags are part of the standard spring taglibs, found in the org.springframework.web.servlet.* jar included in the Spring 3.0 distribution.
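If it isn’t already declared, the form taglib is pulled into the jsp with the standard directive:

<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>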

EDIT:
Stuart Gunter pointed out in a comment to this post that there is a workaround for injecting your own messages using spring’s autowiring. Click the jump for his example.

Resources
Hibernate 4.x Validation reference implementation
Spring MVC 3.0 documentation

Quartz Scheduled Jobs – v1.5.2

Java, XML, and cron driven scheduling made easy.

Projects here and there often need some kind of mechanism to schedule jobs at odd hours or intervals. Quartz is a robust, flexible tool you can use to accomplish anything from simple to complex job scheduling. There are a number of ways to use/configure quartz, but I’ve grown accustomed to using it with an xml based configuration. There are a few things we need to set up, so there is a certain amount of plumbing to work out, but once that infrastructure is in place, it’s much less work to set up additional jobs.

web.xml

Originally, I went on about writing a custom quartz servlet to initialize the engine, but there’s an even easier way to set this up, as Sibz has pointed out in a comment:

<servlet>
	<servlet-name>
		QuartzInitializer
	</servlet-name>
	<display-name>
		Quartz Initializer Servlet
	</display-name>
	<servlet-class>
		org.quartz.ee.servlet.QuartzInitializerServlet
	</servlet-class>
	<load-on-startup>1</load-on-startup>
	<init-param>
		<param-name>config-file</param-name>
		<param-value>/some/path/my_quartz.properties</param-value>
	</init-param>
	<init-param>
		<param-name>shutdown-on-unload</param-name>
		<param-value>true</param-value>
	</init-param>
	<init-param>
		<param-name>start-scheduler-on-load</param-name>
		<param-value>true</param-value>
	</init-param>
</servlet>

This xml snippet was blatantly hijacked from quartz’s documentation page. As you might have guessed, this xml configuration goes in your web.xml. No need to write your own initializer servlet, just plug and play.

We’ll need to add two configuration files. The one that fine-tunes the engine in our example is quartz.properties…

quartz.properties

If you noticed in the web.xml (the init param named “config-file” is set to the path <param-value>/some/path/my_quartz.properties</param-value>), we load up a properties file that configures the quartz engine.

org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.JobInitializationPlugin
org.quartz.plugin.jobInitializer.fileNames = quartz-config.xml
org.quartz.plugin.jobInitializer.overWriteExistingJobs = true
org.quartz.plugin.jobInitializer.failOnFileNotFound = true

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool 
org.quartz.threadPool.threadCount = 5 
org.quartz.threadPool.threadPriority = 5

The first half of the settings is pretty straightforward, but the second half is all about tuning. Setting a class other than SimpleThreadPool means you’ve written your own implementation for quartz thread management, in which case you probably really know what you’re doing and can stop reading. threadCount controls the number of quartz threads dedicated to the engine. One is plenty for a job that fires off once or a few times a day; if you plan on running thousands of jobs a day under heavy load, you’ll want something like 50 threads, up towards about 100. threadPriority 5 means normal priority, on a scale where 1 is the lowest and 10 the highest. For the most part 5 is plenty; if you have cpu intensive processing going on, you can tune this to make sure your jobs fire off when they’re supposed to.
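As a hypothetical example, a heavily loaded instance might bump the pool up like so:

# hypothetical high-volume tuning
org.quartz.threadPool.threadCount = 50
org.quartz.threadPool.threadPriority = 5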

The second file we need to set up is the xml that configures your quartz job…

quartz-config.xml

<quartz>
    <job>
        <job-detail>
            <name>scheduler</name>
            <group>schedulers</group>
            <description>schedule a nightly job</description>
            <job-class>com.examples.quartz.Scheduler</job-class>
            <volatility>false</volatility>
            <durability>false</durability>
            <recover>false</recover>
			<job-data-map>
				<entry>
					<key>username</key>
					<value>someUser</value>
				</entry>
				<entry>
					<key>password</key>
					<value>somePassword</value>
				</entry>
			</job-data-map>            
        </job-detail>
        <trigger>
            <cron>
                <name>scheduler-trigger</name>
                <group>scheduler-triggers</group>
                <job-name>scheduler</job-name>
                <job-group>schedulers</job-group>
                <cron-expression>0 0/5 * * * ?</cron-expression>
            </cron>
        </trigger>
    </job>
</quartz>

This file is made up of two main sections. The job-detail section configures the job’s metadata, while the trigger section defines the configuration and cron expression that fires off the job. The name and group in the trigger’s job-name and job-group must match those in the job-detail, or the xml parser will complain. Parameters can be added in the job-data-map and passed into the job-class for processing. Which brings us to the last item of business: THE JOB IMPLEMENTATION CLASS!!!

Scheduler.java

Scheduler is the job implementing class that defines the unit of work performed by the quartz job. JobExecutionContext contains all the job metadata defined in the configuring xml, and the data map is the object that contains all the name/value pairs listed in the xml we just wrote up. Here’s the full class:

package com.examples.quartz;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class Scheduler implements Job {
    
	protected static final Log log = LogFactory.getLog(Scheduler.class);

	public void execute(JobExecutionContext jobContext) 
		throws JobExecutionException {

		log.info("entering the quartz config");

		JobDataMap map = jobContext.getJobDetail().getJobDataMap();
		String username = (String) map.get("username");
		String password = (String) map.get("password");

		log.info("mapped data: " + username + "/" + password);
	}

}

.. And that’s all there is to setting up a quartz job. If we want to add additional quartz jobs, all we need to do is add another job node to our quartz-config.xml and write another class implementing the Job interface. The rest pretty much stays the same, since all the heavy lifting has been done.
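For example, a second job (names invented here) would just be another job node alongside the first in quartz-config.xml:

<job>
    <job-detail>
        <name>reporter</name>
        <group>schedulers</group>
        <description>a second, hypothetical nightly job</description>
        <job-class>com.examples.quartz.Reporter</job-class>
        <volatility>false</volatility>
        <durability>false</durability>
        <recover>false</recover>
    </job-detail>
    <trigger>
        <cron>
            <name>reporter-trigger</name>
            <group>scheduler-triggers</group>
            <job-name>reporter</job-name>
            <job-group>schedulers</job-group>
            <cron-expression>0 0 2 * * ?</cron-expression>
        </cron>
    </trigger>
</job>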

Ejb3 Basics: Deploying Message Driven Beans

Farewell to lazy auto queue generation in JBoss 5

MDBs were never easier to deploy and manage than when ejb3 first came out. In Jboss 4, all you had to do was annotate a class with @MessageDriven, sprinkle some metadata here and there, stick it in the oven and wham! Instant “I can’t believe I made an MDB!?!” In Jboss AS 5 however, MDB queues are no longer automatically created for your application on boot. An inspection of the MDB lifecycle illustrates why:

  1. MDB deploys
  2. No existing Topic/Queue
  3. Topic/Queue is automatically created
  4. MDB is undeployed
  5. There’s no callback/hook to remove the created Topic/Queue. And if there was, should undeploying the MDB even be allowed to trigger this action?

blatantly stolen from JBAS-5114, 5th comment down – thanks Andy, and DeCoste by proxy

So to reiterate: whereas JBoss AS 4.x would have auto-created MDB queues for you on boot, in 5.0 this no longer holds true. Consider the following MDB:

package com.examples.mdb;

import javax.ejb.*;
import javax.jms.*;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

@MessageDriven(name = "MyQueue", activationConfig = {
        @ActivationConfigProperty(
        		propertyName = "destinationType", 
        		propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(
        		propertyName = "destination", 
        		propertyValue = "queue/MyQueue"),
        @ActivationConfigProperty(
        		propertyName="DLQMaxResent", 
        		propertyValue="1")
})
public class MyQueue implements MessageListener {
	
	private static final Log log = LogFactory.getLog(MyQueue.class);
	
	public void onMessage (Message msg) {
		try {
			
			log.debug("Processing MyQueue queue...");
			ObjectMessage oMsg = (ObjectMessage) msg;
			
			SomeObject result = (SomeObject ) oMsg.getObject();

			/**
			 * do stuff with the object
			 */

		}
		catch (Exception e) {
			e.printStackTrace();
		}
	}
}

In jboss 4 you could leave your MDB class like this, and the app server would automatically handle everything for you. If you plan on using jboss 5.+ however, you will have to choose one of the following..

Wire it yourself in destinations-service.xml

In /deploy/messaging/destinations-service.xml, you can add the MDB destination yourself, letting jboss know to create your queue on boot. Here’s an example configuration:

<?xml version="1.0" encoding="UTF-8"?>
<!--
	Messaging Destinations deployment descriptor.
 -->
<server>

	<mbean 
		code="org.jboss.jms.server.destination.QueueService"
		name="jboss.messaging.destination:service=Queue,name=MyQueue"
		xmbean-dd="xmdesc/Queue-xmbean.xml">
		<depends optional-attribute-name="ServerPeer">
			jboss.messaging:service=ServerPeer
		</depends>
		<depends>
			jboss.messaging:service=PostOffice
		</depends>      
	</mbean>

</server>

The only thing you need to change in this configuration is the queue name – make sure it matches the name of the queue annotated in your MDB class. This by itself is the closest you can get to being lazy. You will need to make sure, however, that you add one destination for each of the MDB queues your application uses. Option two requires a little more work, but you don’t have to muck around with the jboss environment…

Add deployment descriptors to auto create the queue via jboss.xml

You can instead deploy the optional jboss.xml file in your ejb jar file’s META-INF folder (in addition to your persistence.xml file if you’re using entities). Your ejb jar structure should then look like this:

ejb-jar.jar
	- / ejb classes and cool stuff here
	- / META-INF
		- MANIFEST.MF
		- persistence.xml
		- jboss.xml

And this is what jboss.xml would look like:

<?xml version="1.0" encoding="UTF-8"?>
<jboss xmlns="http://www.jboss.com/xml/ns/javaee" 
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
       xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee 
                           http://www.jboss.org/j2ee/schema/jboss_5_0.xsd" 
       version="3.0"> 
	<enterprise-beans>
		<message-driven>
			<ejb-name>MyQueue</ejb-name>
			<destination-jndi-name>queue/MyQueue</destination-jndi-name>
			<create-destination>true</create-destination>
		</message-driven>
	</enterprise-beans>
</jboss>

The key command in this file here being: <create-destination>true</create-destination>. This will flag jboss to create the queue for you if it doesn’t already exist. This approach would probably be better suited for jboss only deployments since the flag to auto create the queue is configured within a jboss exclusive deployment descriptor – jboss.xml.

Once either of these has been implemented, your MDB should be deployed, initialized and ready to fire up. Oh, and fist pumps to ALR for pointing me in the right direction – cheers buddy!

Ejb3 basics: Entities

Entity Beans? Better than 2.1, I promise.

Ejb3 Entity beans are a type of enterprise java bean construct used to model data used by the ejb framework. The basic idea is to manipulate simple java objects, which represent your database data in concrete terms, and then have the framework handle as much of the plumbing as possible when you persist the data. Persisting means to store for later use in some data repository – usually some kind of database. By persisting these entities within the ejb framework we are able to abstract out tasks like updating a table and its associated foreign key table elements, perform queries and caching that automatically handle stuff like pre-populating java objects, and lots of the other boring stuff. In short, using entities in your application will allow you to work more on implementing business logic and less on wiring and mapping DAOs to TransferObjects. For the sake of completeness, the other two important types of ejb beans should be mentioned: the Session and Message driven beans. In case it wasn’t obvious, ejb3 is only possible with java 1.5+, since that’s the release that introduced annotations into the java language.

One of the great things about ejb3 is that entities and persistence got a major overhaul from 2.1 = MAJOR SEXY TIME. Ejb3 does a really good job of simplifying the object model by using annotations in pojos to mark up entities. You can now model your entire data structure in terms of plain old java objects and their relationships, and the persistence engine will go and create all the necessary tables and sequencers and supporting schema elements.

Example Entity

Here’s an example of a bidirectional one to many relationship between User and Contact. Consider the following class:

package com.examples.entities;  

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.persistence.*;

@Entity
@Table(name="tb_user")
@SequenceGenerator(name = "sq_user",sequenceName = "sq_user", initialValue=1)
public class User implements Serializable {

	private static final long serialVersionUID = 1L;
	
	@Id
	@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="sq_user")
	protected Long id;

	@Column(name="user_name", nullable=false, length=32)
	protected String username;
	protected String password;
	protected String email;	
	
	@OneToMany(mappedBy="user")
	@JoinTable(name="tb_user_contact") 
	protected List<Contact> contacts = new ArrayList<Contact>();
 
	public Long getId() {
		return id;
	}
	public void setId(Long id) {
		this.id = id;
	}	
	
	public String getUsername() {
		return username;
	}
	public void setUsername(String username) {
		this.username = username;
	}
	public String getPassword() {
		return password;
	}
	public void setPassword(String password) {
		this.password = password;
	}	
	public String getEmail() {
		return email;
	}
	public void setEmail(String email) {
		this.email = email;
	}
	
	public List<Contact> getContacts() {
		return contacts;
	}
	public void setContacts(List<Contact> contacts) {
		this.contacts = contacts;
	}

}

This is a fairly common type of entity. Going from top to bottom, lets take a look at the annotations used and examine what they do.

@Entity

@Entity is the annotation that marks this particular java class as an ejb entity. This tells the persistence engine to load up this class and its associated annotations and use it as a model for data in the database. Technically this is the only annotation required in the class for a very simple entity, but there are other annotations we can use to customize and declare more complex relationships.

@Table(name=”tb_user”)

@Table lets you name the table modeled by your pojo. It’s just a simple way to keep things organized in the database. If you don’t specify the table name it will default to the class name.

@SequenceGenerator(name = “sq_user”,sequenceName = “sq_user”, initialValue=1)

@SequenceGenerator lets you set up the sequence used for primary key generation. This is required when you choose GenerationType.SEQUENCE as your primary key generator. The name must match the @GeneratedValue’s generator value – this is how the persistence engine knows how to map the sequence to the column.

@Id

@Id indicates that the following class method or field will map the table’s primary key.

@GeneratedValue(strategy=GenerationType.SEQUENCE, generator = “sq_user”)

@GeneratedValue maps the type of primary key incrementing strategy to use when adding new records to the database. Here are the possible strategies:

  • GenerationType.AUTO
    This indicates that the persistence engine will decide what incrementing strategy to use. Lazy man multiple vendor option.
  • GenerationType.IDENTITY
    This indicates that the persistence engine should use the identity column for incrementing. Vendors that can use this are ones that set an “AUTO-INCREMENT” type of flag to true. MySQL is one example of a vendor that can use this type.
  • GenerationType.SEQUENCE
    This tells the persistence engine to use a sequence to manage the increment values when inserting new values into the table. Postgres is an example of a vendor that uses sequences.
  • GenerationType.TABLE
    This tells the persistence engine to use a separate table to track increments on the primary key. This is more of a general strategy than a vendor specific implementation.

@Column(name=”user_name”, nullable=false, length=32)

@Column allows you to define column attributes for each class field. You can choose to define all of the possible relevant attributes or just the ones that you want to define. Other possible attributes are:

  • columnDefinition=”varchar(512) not null”
    Allows you to define native sql to your column definition
  • updatable=false
    Sets the column to allow updates or not. If it is not explicitly set to false, it will default to true, allowing updates to happen.
  • precision=10
    Decimal precision
  • scale=5
    Decimal scale
  • unique=true
    Defines if the column should contain only unique values.
  • table=”tb_user”
    Maps the table name for which this column belongs.
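Combining a few of those attributes, a column definition might look like this (the field is invented for illustration):

	@Column(name="description", columnDefinition="varchar(512) not null",
			updatable=false, unique=false)
	protected String description;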

@OneToMany(mappedBy=”user”)

@OneToMany lets the persistence engine know that this field or method has a one to many type of relationship with the mapped object and the mappedBy attribute lets the persistence engine know the foreign key used when mapping the relationship. It will then set up any necessary relationship tables needed to express the relationship. This would normally include creating a separate table to hold all the key mappings.

@JoinTable(name=”tb_user_contact”)

@JoinTable lets you define the join table’s properties. In this case we’re using it to name the join table mapping the one to many relationship. A more complete @JoinTable annotation looks like this:

	@OneToMany(mappedBy="user")
	@JoinTable(
	    name="tb_user_contact",
	    joinColumns=@JoinColumn(name="user_id",referencedColumnName="id"),
	    inverseJoinColumns=@JoinColumn(name="contact_id",referencedColumnName="id")
	)
	public List<Contact> getContacts() {
		return contacts;
	}

This covers the owning class, here’s the class being pwnt:

import javax.persistence.*;

@Entity
@Table(name="tb_contact")
public class Contact {

	@Id
	@GeneratedValue(strategy=GenerationType.IDENTITY)
	protected Long id;
	protected String email;	

	@ManyToOne
	protected User user;
	
	
	public Long getId() {
		return id;
	}
	public void setId(Long id) {
		this.id = id;
	}
	public String getEmail() {
		return email;
	}
	public void setEmail(String email) {
		this.email = email;
	}
	public User getUser() {
		return user;
	}
	public void setUser(User user) {
		this.user = user;
	}
	
}

@ManyToOne

@ManyToOne annotation implies the connecting foreign key used in the bidirectional mapping. When the persistence engine reads all the entities in and starts generating all the sql to model the object model, it will generate three tables from these two java classes. One table, “tb_user”, will represent the user class, “tb_contact” will represent the contact class, and finally “tb_user_contact” will represent the relationship mapping table. This annotation is what turns a unidirectional relationship into a bidirectional relationship. Here’s an example:

	@ManyToOne
	public User getUser() {
		return user;
	}

@ManyToMany

@ManyToMany describes the many to many association between entities. It is used in conjunction with the @JoinTable annotation to define the mapping table used for storing all the relationships. Here’s an example:

	@ManyToMany
	@JoinTable(name="tb_user_contact")
	public List<Contact> getContacts() {
		return contacts;
	}

and then in the Contact class we would have:

	@ManyToMany(mappedBy="contacts")
	public User getUser() {  
 		return user;  
	}  

The owning entity will always have the @JoinTable, and the owned entity will always have the @ManyToMany(mappedBy=?) annotation.

These are just a few things that can be done with ejb3. I would suggest sifting through the java 5 javadocs to get a better feel for the other possible annotations.

For more reading:
Javax Persistence API
Java 5 Persistence Tutorial
Official Java Persistence FAQ

War deployment file structure

What’s a war deployment, do I need my own army?

When it comes to deploying a web based application we have a few options on the table. Well, only one really if you stick to J2EE standards – not counting Ear deployments, which also deploy web apps via wars. Outside the world of J2EE though, it becomes a crap shoot based on the web framework you’re using. Maybe you ftp your files manually, edit html directly on the server, or upload all your files and rename the folders so the new code is live and the old code is no longer accessible. In the J2EE world, we use deployable artifacts like war files. A war file is basically a collection of files structured in a certain way and zipped up. A war file can also be exploded, which simply means it’s not zipped up. So what does a war look like?

webapp.war
	|-- images/
	|   `-- banner.jpg
	|-- index.html
	`-- WEB-INF/
		|-- jsps/
		|   |-- public/
		|   |   `-- login.jsp
		|   `-- private/
		|       |-- application.jsp
		|       `-- settings.jsp
		|-- lib/
		|   `-- some-library.jar
		|-- classes/
		|   `-- compiled.class
		`-- web.xml

There are 2 sections which pretty much divide up the entire archive. All the stuff directly inside the root / of the war file, and then everything that’s inside the WEB-INF directory. The key difference between the two is one is publicly accessible while the other one has protected access; it’s a violation of the spec for an application server to allow public exposure to anything in the WEB-INF folder of your application.

Context

Your war file has an application context. An application context is the reserved namespace your web application has in relation to the application server’s qualified domain name. For example, if on startup you bound jboss to the localhost domain your server’s fully qualified url would be:

http://localhost:8080/

This represents the root of your application server. If you are deploying a single war/a single web application, by default your application will take on the context name of the war file. So in our example above, if we wanted to access webapp.war’s deployed application we would need to call it this way:

http://localhost:8080/webapp

Jboss only!

Out of the box, jboss comes with a default ROOT.war application in the deploy directory that links to other jboss web applications. One great thing about jboss is you can set up your configuration instance to deploy whatever components you want, meaning you can remove this ROOT.war file and use your own as the context root. You would need to replace the default ROOT.war file with the contents of your war file to make your application use the same context. This is kind of messy though, so I would recommend just removing the ROOT.war file and instead stick a jboss-web.xml file in your war’s WEB-INF directory configured like this:

 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE jboss-web PUBLIC "-//JBoss//DTD Web Application 2.3//EN" 
    "http://www.jboss.org/j2ee/dtd/jboss-web_3_0.dtd">

<jboss-web>

   <context-root>/</context-root>

</jboss-web>

The context-root element here basically tells jboss to load up the war file into the root context of the application server, so calls to “http://localhost:8080/” will be processed by your war file. There’s also a way to map virtual hosts in jboss, discussed in another article, Virtual hosting with Jboss.

Compiled Resources

The other 2 things that need to go into the WEB-INF directory are the WEB-INF/lib and WEB-INF/classes directories. The /lib directory is a place where you put all of your web application’s third party jar files, as well as any jar’d up version of your custom code and resources. If you choose not to jar up all your custom code and resources you can then stick all your .class files and application resources in the WEB-INF/classes directory. From my point of view, its cleaner to just jar everything up and stickem in the /lib directory. Naked class files and resources are so.. messy.. But that’s just my opinion. It’s important to note, if you have empty /lib or /class directories you don’t need to include them in your deployment, they are only required if you are going to stick resources in there.

Static Resources

Now that you’ve figured out all your application resources, you can then stick all your static resources in the root of your war file. I should point out that there are two sides of the fence about how to proceed here though; purists think everything but the basics should be obscured from the user to prevent them from hacking urls/jsps (by sticking jsps in the WEB-INF, hiding them from exposure), while other folks don’t really care. I think the folks who don’t really care are mostly using a web framework that hides the true jsp paths and file names. If you’re not using a web framework, it had better be for a good reason – and you then might want to consider obscuring the jsps in the WEB-INF.

That’s pretty much all there is to a war’s file structure. When it comes time to deploy, most of the time the war file is deployed as a zipped up archive. Jboss also supports the notion of exploded wars, which is basically just an unzipped war file. Exploded wars are like a double edged sword though – if you deploy as an exploded war you get the benefit of not having to redeploy the entire application if you want to fix something like text on a page. Be wary though, circumventing a build process is never a good idea. The build process is there for a reason; its purpose is to track code and updates, and to make sure only tested code is released.

Java, XML and XStream

What’s an object/xml serializing/deserializing library?

If you’ve never worked with an object/xml serializer and are considering writing your own from scratch, you may want to consider using a library like XStream. XStream is very good at moving java into xml and back. It allows a high level of control over how the xml can be organized and structured and even allows the user to create their own converters for even more flexibility.

But still, why use something like this when you can be perfectly happy writing your own data conversion scheme? The problem really boils down to flexibility, and to reinventing the wheel. Ninety percent of the time you’re already interrogating a datasource (like some rdbms such as oracle, postgres or mysql) and will be using some kind of TransferObject or maybe an Entity persistence scheme built around pojos. If you write your own serializing engine from scratch by mapping pojos to dom4j nodes, constructing Document objects and then using them for stuff like xsl transformations, you end up missing out on a great tool.

It may not seem obvious right now, but a homegrown serializer is the kind of thing you write once and forget about, and then months or years down the line, when it comes time to update your data model or expand its framework, you end up rebuilding all the dom4j stuff from scratch. Unless you take the lazy route and append any new xml to the root node to save yourself the entire node refactor. Maybe simple objects with one or two simple nested objects won’t seem like much, but if your object becomes anything approaching a complex lattice, then going back and tweaking the entire structure when you want to expand or refactor your xml can become quite perilous. Especially if you want to make your xml as xpath friendly as possible.

Edit:
As Felipe Gaucho has been kind enough to point out, Xstream only writes a text string as the serialized object. It will not perform any validation on your XML, so you’re left on your own to validate it post serialization. Something like JAXP comes to mind to tackle XSD based validation, or JiBX if you’re looking for data binding.

So what does XStream do for me?

Consider these objects:

public class MyClass {

	protected MyObject object;
	
}

public class MyObject {

	protected ArrayList<Field> Field;
	
}

XStream lets you do something like this if you want to serialize an object like MyClass to xml:

 XStream xstream = new XStream();
String myClassXML= xstream.toXML(myClassObject);

and if you want to go from xml back to a java object you can do this:

 XStream xstream = new XStream();
MyClass myClassObject= xstream.fromXML(myClassXML);

As you can see, all the plumbing goes away and you are now free to concentrate on writing the rest of your application. And if you want change your object model, consolidate nodes or rearrange the structure of your xml, all you have to do is update your pojo and your xml immediately will reflect the updated changes in the data model on serialization.

It should be noted that to completely deserialize xml, your object needs to correctly map all the data in the xml. If you have trouble deserializing, try building a mock object, populating it with sample values, and serializing it to xml; then you can compare the test xml to your actual xml and make your changes.
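A hedged sketch of that round-trip debugging technique (the direct field access here assumes you’re in the same package as MyObject):

// build a mock object with sample values, serialize it, and compare the
// output against the xml you're actually trying to deserialize
MyObject mock = new MyObject();
mock.Field = new ArrayList<Field>();
mock.Field.add(new Field());

XStream xstream = new XStream();
System.out.println(xstream.toXML(mock));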

Aliasing

XStream does not require any configuration, although the xml produced out of the box will likely not be the easiest to read. It will serialize objects into xml nodes according to their fully qualified class names, usually making them very long, as we can see from the following example:

<com.package.something.MyClass>
	<com.package.something.MyObject>
		<List>
			<com.package.something.Field/>
			<com.package.something.Field/>
		</List>
	</com.package.something.MyObject>
</com.package.something.MyClass>

Luckily XStream has a mechanism we can use to alias these long package names. It goes something like this:

XStream xstream = new XStream();
xstream.alias("MyClass", MyClass.class);
xstream.alias("MyObject", MyObject.class);
xstream.alias("Field", Field.class);

Adding an alias like this will let your xml come across nice and neat like this:

<MyClass>
	<MyObject>
		<List>
			<Field/>
			<Field/>
		</List>
	</MyObject>
</MyClass>

Attributes

If you want to make a regular text node an attribute, you can use this call to configure it:

 xstream.useAttributeFor(Field.class, "name");

This will make your xml change from this:

<MyClass>
	<MyObject>
		<List>
			<Field>
				<name>foo</name>
			</Field>
			<Field/>
		</List>
	</MyObject>
</MyClass>

into

<MyClass>
	<MyObject>
		<List>
			<Field name="foo"/>
			<Field/>
		</List>
	</MyObject>
</MyClass>

ArrayList (implicit collections)

ArrayLists are a little tricker. This is what they look like out of the box:

 ...
	<MyObject>
		<List>
			<Field/>
			<Field/>
		</List>
	<MyObject>
...

Note there’s an extra “List” node enclosing the List elements named “Field”. If we want to get rid of that node so that Field sits right under MyObject, we can tell XStream to map an implicit collection by doing the following:

 xstream.addImplicitCollection(MyObject.class, "Field", "Field", Field.class);

where the addImplicitCollection method signature is the following:

/**
	 * Appends an implicit collection to an object for serialization
	 * 
	 * @param ownerType - class owning the implicit collection (class owner)
	 * @param fieldName - name of the field in the ownerType (Java field name)
	 * @param itemFieldName - name of the implicit collection (XML node name)
	 * @param itemType - item type to be aliased by the itemFieldName (class owned)
	 */
	public void addImplicitCollection(Class ownerType,
            String fieldName,
            String itemFieldName,
            Class itemType) 

Adding this implicit collection configuration will streamline the xml so that it looks like this now:

 
<MyClass>
	<MyObject>
		<Field/>
		<Field/>
	</MyObject>
</MyClass>

Notice the “List” node is gone, and “Field” is now directly under “MyObject”. You can find the complete documentation on the XStream website here.

There are plenty of more tricks you can use to configure/format your xml, and there are plenty of examples listed on the XStream website, but these three points here should cover the basics to get you started.