Java 8 MOOC – Session 3 Summary

Last night was the final get-together to discuss the Java 8 MOOC. Any event hosted in August in a city that is regularly over 40°C is going to face challenges, so it was great that we had attendees from earlier sessions plus new people too.

*(image: "Woohoo lambdas!")*

The aim of this session was to talk about Lesson 3, but also to wrap up the course as a whole: to talk about what we liked and what we would have improved (about both the course itself and our user group events).

As in the previous two posts, let’s outline our discussion areas:

findAny() vs findFirst(): Why do we need both of these methods, and when would you use them?

Well, findFirst() is the deterministic version, which will return you the first element in the Stream (according to encounter order - see the section on Ordering in the documentation). So, regardless of whether you run the operation in parallel or serial, if you’re looking for “A” and use findFirst with this list:

["B", "Z", "C", "A", "L", "K", "A", "H"] 

you’ll get the element at index 3 - the first “A” in the list.

But findAny() is non-deterministic, so will return you any element that matches your criteria - it could return the element at index 3, or the one at position 6. Realistically, if the stream is on an ordered collection like a list, when you run findAny on a sequential stream, I expect it will return the same result as findFirst. The real use-case for findAny is when you’re running this on a parallel stream. Let’s take the above list, and assume that when you run this on a parallel stream it’s processed by two separate threads:

    ["B", "Z", "C", "A",    // processed by thread 1
     "L", "K", "A", "H"]    // processed by thread 2

It’s possible that thread 2 finds its “A” (the one at position 6) before thread 1 finds the one at position 3, so this will be the value that’s returned. By configuring the Stream to return any one of the values that matches the criteria, you can potentially execute the operation faster when running in parallel.

If findAny is (potentially) faster in parallel and (probably) returns the same value as findFirst when running in serial, why not use that all the time? Well, there are times when you really do want the first item. If you have a list of DVDs ordered by year the film was released, and you want to find the original “King Kong” (for example), you’ll want findFirst to find the one released in 1933, not the one that was released in 1976 or the one from 2005.

Plus, findFirst is not always going to be slower than findAny, even in parallel. Going back to our list:

["B", "Z", "C", "A", "L", "K", "A", "H"] 

Trying to findFirst or findAny for “H” is likely to perform much the same with either method - “H” is the last element, so there’s no shortcut for findAny to exploit.
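To make that concrete, here’s a minimal runnable sketch (my own example, using the list from above - note that because both matches are the string “A”, the difference is in which element gets inspected, not in the value you get back):

```java
import java.util.Arrays;
import java.util.List;

public class FindExample {

    static final List<String> LETTERS =
            Arrays.asList("B", "Z", "C", "A", "L", "K", "A", "H");

    // findFirst() is deterministic: it always returns the first "A"
    // in encounter order (the one at index 3), parallel or not
    static String first() {
        return LETTERS.parallelStream()
                      .filter(s -> s.equals("A"))
                      .findFirst()
                      .get();
    }

    // findAny() is non-deterministic: on a parallel stream either
    // "A" (index 3 or index 6) may be the one that comes back
    static String any() {
        return LETTERS.parallelStream()
                      .filter(s -> s.equals("A"))
                      .findAny()
                      .get();
    }

    public static void main(String[] args) {
        System.out.println(first()); // always "A"
        System.out.println(any());   // also "A", but with no ordering guarantee
    }
}
```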

Collectors: Maybe it’s just me who doesn’t really see the big picture for collectors. I’m perfectly content with the built-in collectors like toList() and toSet(). It’s easy to see what they do, and to work out when you need to use them.
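For instance, a quick sketch of the everyday ones (my examples, not the course’s):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Stream;
import static java.util.stream.Collectors.toList;
import static java.util.stream.Collectors.toSet;

public class BuiltInCollectors {

    // toList() keeps every element, in encounter order
    static List<String> asList() {
        return Stream.of("a", "b", "a").collect(toList());
    }

    // toSet() removes duplicates (and makes no ordering promises)
    static Set<String> asSet() {
        return Stream.of("a", "b", "a").collect(toSet());
    }

    public static void main(String[] args) {
        System.out.println(asList()); // [a, b, a]
        System.out.println(asSet());
    }
}
```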

I’m also very happy to have discovered joining() - a super-useful way to create Comma Separated Values (CSVs) that I use in my Java 8 demo.
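Something like this (a sketch along the lines of what’s in the demo, not the demo code itself):

```java
import java.util.stream.Stream;
import static java.util.stream.Collectors.joining;

public class JoiningExample {

    // joining(delimiter) concatenates the elements with the
    // delimiter in between - instant CSV
    static String csv() {
        return Stream.of("mocha", "latte", "espresso")
                     .collect(joining(", "));
    }

    public static void main(String[] args) {
        System.out.println(csv()); // mocha, latte, espresso
    }
}
```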

Where things get a bit murky for me is where we start chaining up collectors (it should be obvious from my lack of a clear example that I’m not 100% certain under which circumstances these are useful).
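The shape I mean is something like this (my own sketch): one collector passed as a parameter to another, here counting() nested inside groupingBy() to count the words of each length:

```java
import java.util.Map;
import java.util.stream.Stream;
import static java.util.stream.Collectors.counting;
import static java.util.stream.Collectors.groupingBy;

public class ChainedCollectors {

    // counting() is itself a collector, passed as a parameter
    // to groupingBy() - the "chain inside a param" shape
    static Map<Integer, Long> wordCountsByLength() {
        return Stream.of("to", "be", "or", "not", "to", "be")
                     .collect(groupingBy(String::length, counting()));
    }

    public static void main(String[] args) {
        System.out.println(wordCountsByLength()); // {2=5, 3=1}
    }
}
```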

As a group, we think the chained collectors are kinda ugly - not because we’re against chaining (we like Streams), but maybe because it’s another chain inside a param to a chain.

We think this is an area where some good, solid examples and a bit of daily use will make it much clearer to developers. We hope.

Related to this, the course didn’t go into creating your own collectors at all. My personal (under-informed) opinion is that I guess most developers should be able to use either the out-of-the-box collectors (toList etc) or use the collector chaining to build what they need. If you need a custom collector, perhaps you haven’t considered everything that’s already available to you. But as a group, we decided we would have liked to see this topic anyway so that we could get a deeper understanding of what collectors are and how they work.
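For reference (and with the caveat that this is my sketch, not course material), a hand-rolled collector is built from four pieces via Collector.of - a supplier for the mutable container, an accumulator, a combiner for merging partial results in parallel, and a finisher:

```java
import java.util.stream.Collector;
import java.util.stream.Stream;

public class CustomCollector {

    static final Collector<String, StringBuilder, String> CONCAT =
            Collector.of(
                    StringBuilder::new,       // supplier: new empty container
                    StringBuilder::append,    // accumulator: add one element
                    StringBuilder::append,    // combiner: merge two containers
                    StringBuilder::toString); // finisher: produce the result

    static String concat() {
        return Stream.of("a", "b", "c").collect(CONCAT);
    }

    public static void main(String[] args) {
        System.out.println(concat()); // abc
    }
}
```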

Exercises for lesson 3: Well. What can we say? I really hope there are people reading this who haven’t finished the course yet, because the Sevilla Java User group would like to say to you: don’t despair, the lesson 3 exercises are substantially harder than those for lessons 1 and 2. Honestly, the whole group considered it less of a learning curve and more of a massive cliff to climb.

*(image: "I have no idea what I am doing")*

I mean, it was great to have something so challenging to end on, but it probably would have been less ego-destroying if we could have got up to that level gradually instead of having it sprung on us.

The good thing about Part 2 of the lesson 3 exercises was that we had three very different answers to discuss in the group. None of us were super happy with any of them, but we could see definite pros and cons of each approach, and that’s something you really want to learn in a course like this.

It was also really great to have a rough performance test to run on your own computer, so that you could really see the impact of your choices on the performance of the stream.

For more info
I’m going to add a shameless plug to a friend’s book here. I’ve been reading a lot about Java 8 for this course, for my Java 8 demo, and to generally get up to speed. My favourite book for getting to grips with lambdas and streams is Java 8 Lambdas: Pragmatic Functional Programming by Richard Warburton. It also contains more info about collectors, so maybe some of our questions around how to use these in more complex situations are answered in there.

In Summary
We really enjoyed the MOOC, and the sessions to get together to discuss it. We particularly liked that the meetups were a safe place to ask questions and discuss alternative solutions, and that we weren’t expected to be genius-level experts in order to participate fully.

If/when Oracle re-runs the MOOC, if you didn’t get a chance to take part this time I highly recommend signing up. And if you can find (or run) a local meetup to discuss it, it makes the experience much more fun.

Java 8 MOOC – Session 2 Summary

As I mentioned last week, the Sevilla Java User Group is working towards completing the Java 8 MOOC on lambdas and streams. We’re running three sessions to share knowledge between people who are doing the course.

The second week’s lesson was about Streams - how you can use the new stream API to transform data. There was also a whole section on Optional, which initially seemed like rather a lot, but it turns out that Optional can do rather more than I originally thought.

In the meetup session, we talked about:
Optional: we were pretty comfortable, I think, with using Optional to prevent a NullPointerException. What we weren’t so clear on were the examples of filter() and map() - if you were getting your Optional values from a stream, why wouldn’t you do the map and the filter on the stream first? For example, why do this:

    .findFirst()
    .map(String::trim)
    .filter(s -> s.length() > 0)
    .ifPresent(System.out::println);

when you could map and filter in the stream to get the first non-empty value? That certainly seems like an interesting question in relation to streams.
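In other words, something like this sketch (my example data), where the mapping and filtering happen as stream operations and Optional only appears at the very end:

```java
import java.util.Optional;
import java.util.stream.Stream;

public class StreamFirst {

    // map and filter on the stream, then findFirst - the Optional
    // only shows up for the final "was there a match?" question
    static Optional<String> firstNonEmpty(Stream<String> lines) {
        return lines.map(String::trim)
                    .filter(s -> s.length() > 0)
                    .findFirst();
    }

    public static void main(String[] args) {
        firstNonEmpty(Stream.of("   ", "", "  hello  ", "world"))
                .ifPresent(System.out::println); // hello
    }
}
```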

I can see Optional being more useful when other APIs fully support Java 8 and return Optional values, then you can perform additional operations on return values.

That terminal operation’s not actually terminal??: We ran into this a couple of times in our examples in the session; one example is the code above (let’s copy it down here so we can look at it more closely):

    .findFirst()
    .map(String::trim)
    .filter(s1 -> s1.length() > 0)
    .ifPresent(System.out::println);

Isn’t findFirst() a terminal operation? How can you carry on doing more operations on that?

The answer is, of course, that the return type of the terminal operation can also lead to further operations. The above is actually:

    Optional<String> result = /* the stream so far */
                                  .findFirst();
    result.map(String::trim)
          .filter(s1 -> s1.length() > 0)
          .ifPresent(System.out::println);

Our terminal operation returns an Optional, which allows you to do further operations. Another example of this confusion:

    .map(String::toLowerCase)
    .collect(toList())
    .forEach(System.out::println);

Here, collect() is a terminal operation, but it returns a list, which also allows forEach():

    List<String> results = /* the stream so far */
                               .map(String::toLowerCase)
                               .collect(toList());
    results.forEach(System.out::println);

So be aware that just because it’s called a terminal operation, doesn’t mean you can’t perform other operations on the returned value.

Parallel/sequential/parallel: there had been a question in the previous week about why you could write code like this:

    .parallel()
    .map(String::trim)
    .sequential()
    .filter(s1 -> s1.length() > 0)
    .parallel()
    .forEach(System.out::println);

and whether that would let you dictate which sections of the stream were parallel and which were to be processed in serial. Lesson two set the record straight, declaring “the last operator wins” - meaning all of the above code will be run as a parallel stream. I can’t find any documentation for this; I’ll edit this post if I locate it.

Unordered: “Why would you ever want your stream to be unordered?” - the answer is that unordered() doesn’t turn your sorted collection into one with no order, it just says that when this code is executed, the order of elements doesn’t matter. This might make processing faster on a parallel stream, but as a group we figured it would probably be pointless on a sequential stream.
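One concrete case where it can pay off (my example - the Stream javadoc makes the same point) is distinct() on a parallel stream: keeping the first of each set of duplicates in encounter order is expensive, and unordered() tells the stream that any one of them will do:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class UnorderedExample {

    // on an ordered parallel stream, distinct() must keep the *first*
    // of each run of duplicates; unordered() relaxes that to *any* one
    static Set<Integer> distinctValues(List<Integer> input) {
        return input.parallelStream()
                    .unordered()
                    .distinct()
                    .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        System.out.println(distinctValues(Arrays.asList(1, 2, 2, 3, 3, 3)));
    }
}
```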

Efficiency optimisations and order of stream operations: We had a long conversation about the order in which you perform operations in a stream. The MOOC (in fact, most documentation around Streams) tells us that a) streams are lazy, and not evaluated until a terminal operator is encountered, and b) this enables optimisation of the operations in the stream. That led to a discussion about the following code:

    .map(String::toLowerCase)
    .filter(s -> s.length() % 2 == 1)
    .collect(toList());

The filter operation should result in fewer items to process in the stream. Given that the map() operation doesn’t change anything that filter() relies on, will this code be optimised somehow under the covers so that the filter is actually executed first? Or are optimisations still going to respect the order of operations on a stream?

Our case is actually a very specific case, because a) the map() returns the same type as the params passed in (i.e. it doesn’t map a String to an int) and b) the map() doesn’t change the characteristic the filter() is looking at (i.e. length). But generally speaking, you can’t expect these conditions to be true - in fact I bet in a large number of cases they are not true. So pipeline operations are performed in the order in which they are written, meaning that our map and filter will not be re-ordered into a more efficient order.

A good rule of thumb seems to be to do filtering as early in the stream as possible - that way you can potentially cut down the number of items you process in each step of the stream. Therefore our code would probably be better as:

    .filter(s -> s.length() % 2 == 1)
    .map(String::toLowerCase)
    .collect(toList());

**Flat Map**: what...? flatMap() is one of those methods that makes total sense once you get the hang of it, and you don't understand why it was so confusing. But the first time you encounter it, it's confusing - how is flatMap() different to map()?

Well, flatMap is used to squish (for example) a stream of streams into just a simple stream. It’s like turning a 2-dimensional array into a single dimension so that you can iterate over all the items without needing nested for-loops. There’s an example on StackOverflow, and some more examples in answer to this question.
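A minimal sketch of that two-dimensions-into-one flattening:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapExample {

    // map(List::stream) would give a Stream<Stream<String>>;
    // flatMap squishes it into a single Stream<String>
    static List<String> flatten(List<List<String>> nested) {
        return nested.stream()
                     .flatMap(List::stream)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<String>> nested = Arrays.asList(
                Arrays.asList("a", "b"),
                Arrays.asList("c", "d"));
        System.out.println(flatten(nested)); // [a, b, c, d]
    }
}
```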

Comparators: We’ve probably all written comparators at some point; it’s probably one of those examples where we really did use anonymous inner classes “in the olden days” and were looking forward to replacing them with lambdas.

    reader.lines()
          .sorted(new Comparator<String>() {
              @Override
              public int compare(String o1, String o2) {
                  return ???;
              }
          })
          .collect(toList());

Sadly, using a lambda still doesn’t answer the question “do I minus o1 from o2, or o2 from o1?":

    reader.lines()
          .sorted((o1, o2) -> ???)
          .collect(toList());

But there’s yet another new method in Java 8 here that can save us, one that is not nearly as well publicised as it should be. There’s a Comparator.comparing() that you can use to really easily define what to compare on. The JavaDoc and signature look kinda confusing, but this is one of those places where method references suddenly make loads of sense:

    reader.lines()
          .sorted(comparingInt(String::length))
          .collect(toList());

(Here we’re actually using the comparingInt method as we’re going to compare on a primitive value). Personally this is one of my favourite new features in Java 8.
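And because comparing() and friends return a Comparator, you can chain tie-breakers onto them - a small sketch (my own example): sort by length, then alphabetically within each length:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class ComparingExample {

    // primary key: length; tie-breaker: natural (alphabetical) order
    static List<String> byLengthThenAlpha(List<String> words) {
        return words.stream()
                    .sorted(Comparator.comparingInt(String::length)
                                      .thenComparing(Comparator.naturalOrder()))
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(byLengthThenAlpha(
                Arrays.asList("pear", "fig", "plum", "kiwi")));
        // [fig, kiwi, pear, plum]
    }
}
```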

Join us next week for the last session on Java 8 - Lambdas and Streams.

Using Groovy to import XML into MongoDB

This year I’ve been demonstrating how easy it is to create modern web apps using AngularJS, Java and MongoDB. I also use Groovy during this demo to do the sorts of things Groovy is really good at - writing descriptive tests, and creating scripts.

Due to the time pressures in the demo, I never really get a chance to go into the details of the script I use, so the aim of this long-overdue blog post is to go over this Groovy script in a bit more detail.

Firstly I want to clarify that this is not my original work - I ~~stole~~ borrowed most of the ideas for the demo from my colleague Ross Lawley. In this blog post he goes into detail of how he built up an application that finds the most popular pub names in the UK. There’s a section in there where he talks about downloading the open street map data and using python to convert the XML into something more MongoDB-friendly - it’s this process that I basically stole, re-worked for coffee shops, and re-wrote for the JVM.

I’m assuming if you’ve worked with Java for any period of time, there has come a moment where you needed to use it to parse XML. Since my demo is supposed to be all about how easy it is to work with Java, I did not want to do this. When I wrote the demo I wasn’t really all that familiar with Groovy, but what I did know was that it has built in support for parsing and manipulating XML, which is exactly what I wanted to do. In addition, creating Maps (the data structures, not the geographical ones) with Groovy is really easy, and this is effectively what we need to insert into MongoDB.

Goal of the Script

  • Parse an XML file containing open street map data of all coffee shops.
  • Extract latitude and longitude XML attributes and transform into MongoDB GeoJSON.
  • Perform some basic validation on the coffee shop data from the XML.
  • Insert into MongoDB.
  • Make sure MongoDB knows this contains query-able geolocation data.

The script is PopulateDatabase.groovy; that link will take you to the version I presented at JavaOne.


Firstly, we need data

I used the same service Ross used in his blog post to obtain the XML file containing “all” coffee shops around the world. Now, the open street map data is somewhat… raw and unstructured (which is why MongoDB is such a great tool for storing it), so I’m not sure I really have all the coffee shops, but I obtained enough data for an interesting demo using*[amenity=cafe][cuisine=coffee_shop] 

The resulting XML file is in the github project, but if you try this yourself you might (in fact, probably will) get different results.

Each XML record looks something like:

    <node id="178821166" lat="40.4167226" lon="-3.7069112">
        <tag k="amenity" v="cafe"/>
        <tag k="cuisine" v="coffee_shop"/>
        <tag k="name" v="Chocolatería San Ginés"/>
        <tag k="wheelchair" v="limited"/>
        <tag k="wikipedia" v="es:Chocolatería San Ginés"/>
    </node>

Each coffee shop has a unique identifier and a latitude and longitude as attributes of a node element. Within this node is a series of tag elements, all with k and v attributes. Each coffee shop has a varying number of these attributes, and they are not consistent from shop to shop (other than amenity and cuisine which we used to select this data).


*(image: "Script Initialisation")*

Before doing anything else we want to prepare the database. The assumption of this script is that the collection we want to store the coffee shops in is either empty or full of stale data. So we’re going to use the MongoDB Java Driver to get the collection that we’re interested in, and then drop it.

There are two interesting things to note here:

  • This Groovy script is simply using the basic Java driver. Groovy can talk quite happily to vanilla Java, it doesn’t need to use a Groovy library. There are Groovy-specific libraries for talking to MongoDB (e.g. the MongoDB GORM Plugin), but the Java driver works perfectly well.
  • You don’t need to create databases or collections (collections are a bit like tables, but less structured) explicitly in MongoDB. You simply use the database and collection you’re interested in, and if it doesn’t already exist, the server will create them for you.

In this example, we’re just using the default constructor for the MongoClient, the class that represents the connection to the database server(s). This default is localhost:27017, which is where I happen to be running the database. However you can specify your own address and port - for more details on this see Getting Started With MongoDB and Java.

Turn the XML into something MongoDB-shaped

*(image: "Parse & Transform XML")*

So next we’re going to use Groovy’s XmlSlurper to read the open street map XML data that we talked about earlier. To iterate over every node we use: xmlSlurper.node.each. For those of you who are new to Groovy or new to Java 8, you might notice this is using a closure to define the behaviour to apply for every “node” element in the XML.

Create GeoJSON

*(image: "Create GeoJSON")* Since MongoDB documents are effectively just maps of key-value pairs, we’re going to create a Map coffeeShop that contains the document structure that represents the coffee shop that we want to save into the database. Firstly, we initialise this map with the attributes of the node. Remember these attributes are something like:

<node id="18464077" lat="-33.8911183" lon="151.1958773"> 

We’re going to save the ID as a value for a new field called openStreetMapId. We need to do something a bit more complicated with the latitude and longitude, since we need to store them as GeoJSON, which looks something like:

    { 'location' : { 'coordinates' : [<longitude>, <latitude>],
                     'type'        : 'Point' } }

In lines 12-14 you can see that we create a Map that looks like the GeoJSON, pulling the lat and lon attributes into the appropriate places.
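In plain Java terms (the Groovy map literal in the script builds the same shape), that’s roughly:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class GeoJsonSketch {

    // build the GeoJSON sub-document from a node's lat/lon attributes
    // (note GeoJSON coordinates are [longitude, latitude], in that order)
    static Map<String, Object> location(double lat, double lon) {
        Map<String, Object> location = new HashMap<>();
        location.put("coordinates", Arrays.asList(lon, lat));
        location.put("type", "Point");
        return location;
    }

    public static void main(String[] args) {
        System.out.println(location(-33.8911183, 151.1958773));
    }
}
```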

Insert Remaining Fields

*(image: "Insert Remaining Fields")* *(image: "Validate Field Name")* Now for every tag element in the XML, we get the k attribute and check whether it’s a valid field name for MongoDB (it won’t let us insert fields with a dot in, and we don’t want to override our carefully constructed location field). If so, we simply add this key as the field and the matching v attribute as the value into the map. This effectively copies the OpenStreetMap key/value data into key/value pairs in the MongoDB document so we don’t lose any data, but we also don’t do anything particularly interesting to transform it.

Save Into MongoDB

*(image: "Save Into MongoDB")* Finally, once we’ve created a simple coffeeShop Map representing the document we want to save into MongoDB, we insert it into MongoDB if the map has a field called name. We could have checked this when we were reading the XML and putting it into the map, but it’s actually much easier just to use the pretty Groovy syntax to check for a key called name in coffeeShop.

When we want to insert the Map we need to turn this into a BasicDBObject, the Java Driver’s document type, but this is easily done by calling the constructor that takes a Map. Alternatively, there’s a Groovy syntax which would effectively do the same thing, which you might prefer:

collection.insert(coffeeShop as BasicDBObject) 

Tell MongoDB that we want to perform Geo queries on this data

*(image: "Add Geo Index")* Because we’re going to do a nearSphere query on this data, we need to add a “2dsphere” index on our location field. We created the location field as GeoJSON, so all we need to do is call createIndex for this field.


So that’s it! Groovy is a nice tool for this sort of script-y thing - not only is it a scripting language, but its built-in support for XML, really nice Map syntax and support for closures make it the perfect tool for iterating over XML data and transforming it into something that can be inserted into a MongoDB collection.

Converting Blogger to Markdown

I’ve been using Blogger happily for three years or so, since I migrated the blog from LiveJournal and decided to actually invest some time writing. I’m happy with it because I just type stuff into Blogger and It Just Works. I’m happy because I can use my Google credentials to sign in. I’m happy because now I can pretend my two Google+ accounts exist for a purpose, by getting Blogger to automatically share my content there.

A couple of things have been problematic for the whole time I’ve been using it though:

  1. Code looks like crap, no matter what you do.
  2. Pictures are awkwardly jammed in to the prose like a geek mingling at a Marketing event.

The first problem I’ve tried to solve a number of ways, with custom CSS at a blog- and a post-level. I was super happy when I discovered gist: it gave me lovely content highlighting without all the nasty CSS. It’s still not ideal in a Blogger world though, as the gist doesn’t appear in your WYSIWYG editor, leading you to all sorts of tricks to try not to accidentally delete it. Also I was too lazy to migrate old code over, so now my blog is a mish-mash of code styles, particularly where I changed the global CSS multiple times, leaving old code in a big fat mess. There’s a lesson to be learned there somewhere.

The second problem, photos, I just gave up on. I decided I would end up wasting too much time trying to make the thing look pretty, and I’d never get around to posting anything. So my photos are always dropped randomly into the blogs - it’s better than a whole wall of prose (probably).

But I’ve been happy overall, the main reason being I don’t have to maintain anything, I don’t have to worry about my web server going down, I don’t have versions of a blog platform to maintain, patch, upgrade; I can Just Write.

But last week my boss and my colleague were both on at me to try Hugo, a site generator created by my boss. I was resistant because I do not want to maintain my own blog platform, but then Christian explained how I can write my posts in markdown, use Hugo to generate the content, and then host it on GitHub Pages. It sounded relatively painless.

I’ve been considering a move to something that supports markdown for a while, for the following reasons:

  1. These days I write at least half of my posts on the plane, so I use TextEdit to write the content, and later paste this into blogger and add formatting. It would be better if I could write markdown to begin with.
  2. Although I’ve always disliked wiki-type syntax for documentation, markdown is actually not despicable, and lets me add simple formatting easily without getting in my way or breaking my flow.

So I spent a few days playing with Hugo to see what it was, how it worked, and whether it was going to help me. I’ve come up with a few observations:

Hugo really is lightning fast. If I add a .md file in the appropriate place, and with the Hugo server running on my local machine it will turn this into real HTML in (almost) less time than it takes for me to refresh the browser on the second monitor. Edits to existing files appear almost instantly, so I can write a post and preview it really easily. It beats the hell out of blogger’s Preview feature, which I always need to use if I’m doing anything other than posting simple prose.

It’s awesome to type my blog in IntelliJ. Do you find yourself trying to use IntelliJ shortcuts in other editors? The two I miss the most when I’m not in IntelliJ are Cmd+Y to delete a line, and Ctrl+Shift+J to bring the next line up. Writing markdown in IntelliJ with my usual shortcuts (and the markdown plugin) is really easy and productive. Plus, of course, you get IntelliJ’s ability to paste from any item in the clipboard history. And I don’t have to worry about those random intervals when blogger tells me it hasn’t saved my content, and I have no idea if I will just lose hours of work.

I now own my own content. It never really occurred to me before that all the effort I’ve put into three years of regular blogging is out there, on some Google servers somewhere, and I don’t have a copy of that material. That’s dumb, that doesn’t reflect how seriously I take my writing. Now I have that content here, on my laptop, and it’s also backed up in Github, both as raw markdown and as generated HTML, and versioned. Massive massive win.

I have more control over how things are rendered, and I can customise the display much more. This has drawbacks though too, as it’s exactly this freedom-to-play that I worry will distract me from actual writing.

As with every project that’s worth trying, it wasn’t completely without pain. I followed the (surprisingly excellent) documentation, as well as these guidelines, but I did run into some fiddly bits:

  1. I couldn’t quite get my head around the difference between my Hugo project code and my actual site content to begin with: how to put them into source control and how to get my site on github pages. I’ve ended up with two projects on github, even though the generated code is technically a subtree of the Hugo project. I think I’m happy with that.
  2. I’m not really sure about the difference between tags, keywords, and topics, if I’m honest. Maybe this is something I’ll grow into.
  3. I really need to spend some time on the layout and design, I don’t want to simply rip off Steve’s original layout. Plus there are things I would like to have on the main page which are missing.
  4. I needed to convert my old content to the new format.
  5. Final migration from old to new (incomplete).

To address the last point first, I’m not sure yet if I will take the plunge and do full redirection from Blogger to the new github pages site (and redirect my domains too), for a while I’m going to run both in parallel and see how I feel.

As for the fourth point, I didn’t find a tool for migrating Blogger blogs into markdown that didn’t require me to install some other tool or language, and there was nothing that was specifically Hugo-shaped, so I surprised myself and did what every programmer would - I wrote my own. Surprising because I’m not normally that sort of person - I like to use tools that other people have written, I like things that Just Work, I spend all my time coding for my job so I can’t be bothered to devote extra time to it. But my recent experiences with Groovy had convinced me that I could write a simple Groovy parser that would take my exported blog (in Atom XML format) and turn it into a series of markdown files. And I was right, I could. So I’ve created a new github project, atom-to-hugo. It’s very rough, but a) it works and b) it even has tests. And documentation.

I don’t know what’s come over me lately, I’ve been a creative, coding machine.

In summary, I’m pretty happy with the new way of working, but it’s going to take me a while to get used to it and decide if it’s the way I want to go. At the very least, I now have my Blogger content as something markdown-ish.

But there are a couple of things I miss about Blogger:

  1. I actually like the way it shows the blog archive on the right-hand side, split into months and years. I use that to motivate me to blog more if a month looks kinda empty.
  2. While Google Analytics is definitely more powerful than the simple Blogger analytics, I find Blogger’s stats an easier way to get a quick insight into whether people are reading the blog, and which paths they take to find it.

I don’t think either of these are showstoppers, I should be able to work around both of them.

Getting started with the MongoDB Java Driver Tutorial

Brief guide to running the MongoDB tutorial from QCon London and JAX London.


Create a new work area for this tutorial. For the rest of these instructions I’ll refer to it as <location>. I’ve put mine in ~/Documents/workshops/jax

Installing MongoDB

Download MongoDB or get it off the USB stick. Unzip it to an appropriate location, let’s say <location>/mongodb.

Then we’ll have to create the directory for the data to go into:

cd <location>
mkdir data

And then start MongoDB:

./mongodb/bin/mongod --dbpath data

Now MongoDB should be running on localhost and port 27017

If you want, you can connect to the shell - this is not necessary for this workshop:

./mongodb/bin/mongo

Creating your project

Put the Java project into <location>/java3.0

cd <location>/java3.0

./gradlew idea

or

./gradlew eclipse

Open in your favourite IDE and you should be ready to start playing.

More help:

MongoDB Tutorials

Emergency Gradle Procedure:

Download Gradle or get it off the USB stick
Extract to some suitable location.  Mine’s in /Library/Tools/gradle/
Put gradle on your path.  On the Mac, that means adding the following line to ~/.bash_profile:
export PATH="/Library/Tools/gradle/bin:$PATH"

Why Java developers hate .NET

I have been struggling with .NET.  Actually, I have been fighting pitched battles with it.

All I want to do is take our existing Java client example code and write an equivalent in C#.  Easy, right?

Trisha’s Guide to Converting Java to C#

Turns out writing the actual C# is relatively straightforward.  Putting to one side the question of writing optimal code (these are very basic samples after all), to get the examples to compile and run was a simple process:

1. Find-and-replace the following (only you can’t use Ctrl+R like I expect.  Sigh.)

final = readonly (but remove from method params)
System.out.printf = Console.WriteLine
Map = Dictionary
BigDecimal = decimal
Set… oh.  I have no idea.

2. When using callbacks, replace anonymous inner classes with delegates


In Java, with an anonymous inner class:

    something.doSomething(new SomethingRequest(),
        new SomethingCallBack() {
            public void onSuccess() {
                System.out.println("Action successful");
            }

            public void onFailure() {
                System.err.println("Action failed");
            }
        });

In C#, with delegates (the callback methods are passed as arguments):

    private void foo()
    {
        _something.DoSomething(new SomethingRequest(),
                               SomethingSuccess, SomethingFailure);
    }

    private void SomethingSuccess()
    {
        Console.WriteLine("Action successful");
    }

    private void SomethingFailure()
    {
        Console.Error.WriteLine("Action failed");
    }

I rather like this pattern actually. You can’t really tell in the noddy example above, but the C# code is generally shorter and more reusable.

3. Replace getters and setters with properties


In Java:

    private class MyClass {
        private BigDecimal myField = new BigDecimal("-1.0");

        public BigDecimal getMyField() {
            return myField;
        }

        public void setMyField(BigDecimal myField) {
            this.myField = myField;
        }
    }

In C#:

    internal class MyClass
    {
        public decimal MyField { get; set; }
    }

What the… where did all my code go??

My Thoughts

I was pleasantly surprised with the language. In general, for what I was doing, the equivalent C# was a lot less code. The fact that the syntax is not wildly different from Java made the transition relatively easy, even if I don’t get all the nuances.

I didn’t really like the enums - I can see what purpose they serve, but I quite like the way the Java ones are pretty much classes in their own right with properties of their own - it allows you to encapsulate some of your simplest domain objects. But it’s a minor point, not a deal-breaker.

The C# capitalisation makes me queasy though. I just can’t get my head around it. In Java, if I say something like:

com.example.foo.Bar

I know the class is Bar and the rest is the package (or namespace, or whatever). This is more useful when you’re using static methods and so forth:

com.example.foo.Bar.doSomething()

In C#, I know the thing at the end is a method and the thing before that is a class, but it doesn’t jump out at me:

Com.Example.Foo.Bar.DoSomething()

And if you’re using a property, the thing in the middle looks just like another class:

Com.Example.Foo.Bar.SomeProperty.DoSomething()

The whole thing makes me dizzy.

You could argue that all this is redundant with modern tools and IDEs doing all the heavy lifting for you - nice colourisation etc.

Which brings me to The Rant.

Oh My Dear God What Is Wrong With Visual Studio?

C# needs to be a shorter, more succinct language because it takes three billion times longer to do anything in Visual bloody Studio.

I’m coming at this from a Java, IntelliJ point of view, so there’s always the possibility it might be lack of familiarity with the tool, rather than the tool itself, which is the problem.  It’s a long time since I used VS, and that was back in the 90s when I was doing ASP and COM (shhh, don’t tell anyone).

But things shouldn’t be this hard.  I was ready to accept, due to my newbie status, the IDE not helping me. I didn’t expect it to actively hinder me.

For example: it took hours of my life that I will never get back to discover that you can’t simply run a class that has a main method by right-clicking and selecting “run” (note: I’m not even trying Ctrl+F10).  No.  I have to select, at a Solution level, which Projects are runnable.  Then I have to select at the Project level the class with the main method I actually want to run.  Then, it opens up a terminal window and runs it in there, which promptly disappears when the program errors or finishes.  What’s wrong with outputting in the output window of the IDE? Is that not what it’s for?

Finally I worked out how to run the cursed program (seriously, like that’s not the first thing everyone wants to do?  How do people write “Hello World”?).  Now, where are the command line arguments?  Of course, they’re at the project properties level too, because each project only has one entry point.  I seriously had to Google that too because I couldn’t work it out from the IDE alone.

The next day, my ReSharper licence had expired.  I decided I should attempt to limp on without it, after all hundreds of developers must be surviving with just Visual Studio.  How bad could it be?

Very bad, it turns out.

All those helpful little squiggles I was leaning heavily on to convert my Java to C#?

  • The red to tell you you’re utterly wrong.
  • The orange to tell you you could be using less code.
  • The blue to remind you to stop thinking in Java and helpfully offer corrected naming.
  • The green to suggest stuff that C# could do differently.

Yeah, all gone.

How do people code like this?  Do they really just do a build to check if it’s all wrong or not?

Next, I try to find a class.  I actually have no idea how to do this, because I can’t use Ctrl+N.

I realise this is a waste of time anyway, because one thing that really annoys me about Visual Studio is that I can’t find a way to sync the project tree to the class file I’m looking at.  I can’t get it to jump to highlight where I am.  When I’m using IntelliJ, I find this dead useful when I want to see other stuff in the same package.

Less than ten minutes after attempting to use Visual Studio without ReSharper, I’ve abandoned the fight and tracked down a licence and installed it.

Documentation isn’t a standard function of .NET?

What sort of message does this give developers?  Documentation isn’t important?

I always thought Javadoc was pretty ugly and clunky. And now that our IDEs generate so much of it, it’s frequently meaningless.  But it is generated by standard Java tools, and HTML is a standard format that can be read on pretty much any computer with any operating system.

I could not believe how hard it was to get the XML comments out of the C# into something the user can actually read.  Thank goodness, some enterprising member of the team had already done that for us.  All I needed to do was hack/crowbar the tutorial I’d been working on into the generated documentation, so it ended up in the Windows help files in some fashion.

I know there’s a way to get HTML/XML files into the end result using Sandcastle, but hours of Googling only told me it was possible, not how to do it.  I still have no idea what the correct question is to ask to find the solution.

Right now, this is an unsolved mystery.  Our .NET client users will have to read the plain HTML I’m afraid.

In Conclusion

Are we lowly Java developers spoilt with our shiny IDEs?

Or is there such a fundamentally different approach to development for .NET people that all the functionality is there, I just can’t find it?

I’m disappointed if I’m honest.  I’m sure the .NET camp used to tout their tools as a point of superiority.  I ended up feeling sorry for the poor .NET people.  Is there anything they can use that isn’t Visual Studio?

Despite the nasty capitalisation, I found myself surprisingly taken with C#.  But until they can give me a proper development environment, I won’t be tempted by the dark side any time soon.

Validation with Spring Modules Validation

So if Java generics slightly disappointed me lately, what have I found cool?

I’m currently working on a web application using Spring MVC, which probably doesn’t come as a big surprise, it seems to be all the rage these days. Since this is my baby, I got to call the shots as to a lot of the framework choices. When it came to looking at implementing validation, I refused to believe I’d have to go through the primitive process of looking at all the values on the request and deciding if they pass muster, with some huge if statement. Even with Spring’s rather marvelous binding and validation mechanisms to take the worst of the tasks off you, it still looked like it would be a bit of a chore. Given all the cool things you can do with AOP etc I figured someone somewhere must’ve implemented an annotations-based validation plugin for Spring.

And they have. And there’s actually a reasonable amount of information about how to set it up and get it working. The problem is that it’s pretty flexible and has a lot of different options, so when you are running Java 1.5 and Spring 2.0, and actually want to use the validation in a simple, straightforward fashion, the setup instructions get lost.

So here’s my record so I don’t forget in future how I did it.

As a brief summary for those who may not be familiar with Spring, or for those who need reminding (no doubt me in a few months when I’ve completely forgotten what I was working on), Spring provides a Validator interface that you can use to easily plug validation into your application. In the context of web applications, you create your various Validators and in your application context XML file you tell your Controllers to use those validators on form submission (for example).
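To make that concrete, here is a minimal self-contained sketch of the sort of hand-rolled validator the annotations will save us from writing. The Validator and Errors types here are simplified stand-ins for Spring’s org.springframework.validation interfaces, and Person and its length rule are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for Spring's org.springframework.validation types,
// just to show the shape of a hand-written validator.
public class ValidatorSketch {

    interface Errors {
        void rejectValue(String field, String errorCode);
    }

    interface Validator {
        boolean supports(Class<?> clazz);
        void validate(Object target, Errors errors);
    }

    static class Person {
        String name;
        Person(String name) { this.name = name; }
    }

    // The sort of hand-rolled rules the annotation-based validator replaces
    static class PersonValidator implements Validator {
        public boolean supports(Class<?> clazz) {
            return Person.class.isAssignableFrom(clazz);
        }

        public void validate(Object target, Errors errors) {
            Person person = (Person) target;
            if (person.name == null || person.name.isEmpty() || person.name.length() > 50) {
                errors.rejectValue("name", "length");
            }
        }
    }

    // Collects rejected codes in the form fieldName[errorCode]
    static List<String> check(Person person) {
        final List<String> codes = new ArrayList<String>();
        new PersonValidator().validate(person, new Errors() {
            public void rejectValue(String field, String errorCode) {
                codes.add(field + "[" + errorCode + "]");
            }
        });
        return codes;
    }

    public static void main(String[] args) {
        System.out.println(check(new Person("")));       // the name fails the length rule
        System.out.println(check(new Person("Trisha"))); // no errors
    }
}
```

Writing one of these per command object gets tedious fast, which is exactly the chore the annotation-based validator below takes off your hands.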

Spring Modules validation provides a bunch of generic validation out of the box for all the tedious, standard stuff - length validation, mandatory fields, valid e-mail addresses etc (details here). And you can plug this straight into your application by using annotations. How? Easy.

This is my outline application context file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:vld="http://www.springmodules.org/validation/bean/validator"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
           http://www.springmodules.org/validation/bean/validator
           http://www.springmodules.org/validation/bean/validator.xsd">

    <vld:annotation-based-validator id="validator" />

    <!-- Load messages -->
    <bean id="messageSource"
          class="org.springframework.context.support.ResourceBundleMessageSource">
        <property name="basenames" value="messages,errors" />
    </bean>

    <!-- Bean initialisation for validation. You can put these explicitly into your
         controllers or set to autowire by name or type -->
    <bean id="messageCodesResolver"
          class="org.springmodules.validation.bean.converter.ModelAwareMessageCodesResolver" />

</beans>
Really all you’re interested in is the addition of the validation namespace and schema at the top of the file, and the <vld:annotation-based-validator id="validator" /> line which is your actual validator. The other sections are a message source so your error codes can have meaningful messages and a MessageCodeResolver to make use of these.

Eclipse does seem to moan about the way the springmodules schema is referenced, but when you actually start Tomcat up it seems happy enough.

I’ve chosen to give the validator the ID validator because I turned autowire by name on so that all my controllers picked up this validation by default. Note: autowire can be a little bit dangerous and I’ve actually turned it off now because I had a validator bean and a validators list in the context file and my poor SimpleFormController controllers were getting a bit confused over which one to use (in truth, the single validator was overwriting the list, which was not what I was after at all).

Anyway. Now what? We have a validator and we’ve probably wired it into the relevant controllers, either by autowiring them or poking it specifically into our controllers like this:

<bean id="somePersonController"
      class="com.mechanitis.examples.validation.web.PersonController"> <!-- your SimpleFormController subclass -->
    <property name="commandClass"
              value="com.mechanitis.examples.validation.command.PersonCommand" />
    <property name="formView" value="person" />
    <property name="successView" value="success" />
    <property name="validator" ref="validator" />
</bean>

Next step is to add some validation rules. The documentation will show you how to do this using an XML file, which you’re perfectly welcome to do. However what I wanted to show is how to use annotations on your command object to declare your validation. So here you are:

import org.springmodules.validation.bean.conf.loader.annotation.handler.CascadeValidation;
import org.springmodules.validation.bean.conf.loader.annotation.handler.Email;
import org.springmodules.validation.bean.conf.loader.annotation.handler.Length;
import org.springmodules.validation.bean.conf.loader.annotation.handler.Min;
import org.springmodules.validation.bean.conf.loader.annotation.handler.NotNull;

public class PersonCommand {
    private static final int NAME_MAX_LENGTH = 50;

    @NotNull
    @Length(min = 1, max = NAME_MAX_LENGTH)
    private String name;

    @Min(0)
    private Long age;

    @Email
    private String eMail;

    @CascadeValidation
    private RelationshipCommand relationship = new RelationshipCommand();

    private String action;

    // insert getters and setters etc
}

Note that @CascadeValidation tells the validator to run validation on the enclosed secondary Command.

This is just a simple example obviously. But hopefully you can see that now you’ve got the validator set up correctly in your application context file, all you need to cover 90% of your validation needs is to tag the relevant fields with the type of validation you want. If you want to get really clever, the validator supports Valang which allows you to write simple rules. For example, if I only want to validate the name when I’m saving the person rather than passing the command around for some other purpose, I might change the annotations on the name field:

@NotNull(applyIf="action EQUALS 'savePerson'")
@Length(min = 1, max = NAME_MAX_LENGTH, applyIf="action EQUALS 'savePerson'")
private String name;

That’s the basics. Before I let you go off and play though, a word about error messages. As usual with Spring validators, you can specify pretty messages to be displayed to the user when things go wrong. In my application context file above you should see that I’ve specified a properties file called errors. In this file you map your error codes to the messages to display. When using the Spring Modules validation I found the error codes generated were like the ones below, so you might have a file that looks like this:

# *** Errors for the Person screens
PersonCommand.age[min]=An age should be entered
PersonCommand.name[length]=Person name should be between 1 and 50 characters
# etc etc

# *** General errors
not.null=This field cannot be empty

Go play.