Java Spark Framework Tutorial

Some friends and I want to build a large project in Java beginning early next year, so in the meantime we all need to brush up on web development with Java. Although I’ve used Spring (which is awesome by the way), I wanted to explore some alternatives for building lightweight MVC apps that can be rapidly developed without having to worry about the overhead of learning the Spring framework in depth. Thus I discovered Spark, a micro framework for Java.

Setting up Java for web development

Before you begin, download and install Git

git clone .
  • In order to follow along with each step of the tutorial, we’ll want to be able to pull down remote git branches into our local repo
git fetch origin
git branch -a

Downloading the JRE and JDK

  • Check to see if you already have Java 7 installed on your machine by opening either terminal or command prompt and typing java -version

java version

If you see a similar result then you can skip this section and go to “Configuring your IDE” since you already have a working Java runtime.
  • Go to the Java downloads page and follow the instructions to download Java

  • If you’re still having trouble, try setting your JAVA_HOME environment variable and adding the Java install directory to your PATH, or just Google how to install Java 7

Configuring your IDE

As for the choice of IDE, there are quite a few, but I prefer to use IntelliJ IDEA by JetBrains. It supports Java and many other programming languages, and it has superb documentation and a large community of developers who write amazing plugins.

Get Hello World running and configure Maven

Assuming you chose to install the community edition of IntelliJ IDEA, we’ll get Hello World running just to make sure you have your Java Runtime and SDK set up correctly.

  • Open IntelliJ

  • From the main menu click the “Configure” icon; we are going to check whether we have “Maven” configured, as it will be used later.

create project
  • On the configure screen click “Plugins”

  • Verify that Maven is checked along with the Maven Integration Extension

maven checked

Install Maven onto your machine

Apache Maven is a wonderfully complex and powerful tool used by Java developers for everything from build automation to project package installation and versioning to running your JUnit tests. For the most part we’ll be using it as a package manager, similar to RubyGems, PHP’s Composer, or C#’s NuGet.
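To give a concrete picture of what Maven manages, a minimal pom.xml skeleton for a project like ours could look like the following; the coordinates are placeholders, not from the tutorial repo:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <!-- coordinates are placeholders; use your own -->
    <groupId>com.example</groupId>
    <artifactId>sparkle</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <!-- external libraries get declared here -->
    </dependencies>
</project>
```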

Setup Maven for IntelliJ

  • So far so good. Now go back to the IntelliJ home screen and click “Create New Project”

  • On the create project screen you might see a bunch of options in the leftmost sidebar, but the one we want to select is Maven Module under the “Java” heading

  • IntelliJ will then create a new project for us, already set up to use Maven to manage our external dependencies.

new maven project
  • Once the new project has been created (I named mine “Sparkle”), open “Project Settings” for your project; you should see “Maven” in the list of Settings, and your screen will resemble the images below.

  • For Windows users, you’ll want to set your M2_HOME by adding the Maven install folder path to the system environment variables. You then won’t have to explicitly set M2_HOME from the settings screen

windows maven settings
  • For Linux users it will be some variation of /usr/share, or wherever your distribution put the Maven install, but below is my setup for Ubuntu 12

linux maven settings
  • On Mac OSX it will probably be very similar to Linux

Do not think for one second that just because IntelliJ can manage Maven for you, you shouldn’t learn how to use the Maven command line interface.
The Maven cli is pretty robust and is what your IDE calls in the background anyway, so don’t be lazy… actually I take that back… be lazy

Hello World…

git checkout -b hello_step_1 origin/hello_step_1
  • Now that we’ve set up Maven, we’re just going to create a quick Hello World program. From the Project sidebar click src -> main and then right-click the “java” folder

  • Create a new class file and name it “HelloSpark”

  • Now enter the code

  • Next from the top toolbar click Build -> Make Project

  • Then right next to Build on the toolbar click Run -> Run ‘Hello Spark’

  • If you don’t see something like the image below, then you probably misconfigured your Java or IntelliJ settings. Please seek troubleshooting help on StackOverflow or in the IntelliJ documentation.

Hello World
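For reference, the Hello World class from the steps above can be as small as the following sketch; the greeting() helper is my own addition so the class has something to call besides printing:

```java
public class HelloSpark {
    // Plain Hello World; the Spark web framework comes in later.
    static String greeting() {
        return "Hello World";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```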

Running the Spark demo app

git checkout -b spark_demo_step_1 origin/spark_demo_step_1
  • Let’s begin by modifying our Hello World class file to use the Spark framework so we can get started with Java web development

Spark A Java MVC micro framework
  • Maven allows us to include external dependencies within our projects via the pom.xml file. So open up the pom.xml file and add the dependency for Spark.
  • From IntelliJ, when you make a change to a Maven pom.xml file you can select “enable auto-import” so it refreshes your Maven dependencies whenever you update your pom.xml
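The Spark dependency block will look something like this; the version shown is an assumption, so check Maven Central for the current one (very old 1.x releases used the plain spark groupId instead of com.sparkjava):

```xml
<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-core</artifactId>
    <version>2.9.4</version> <!-- pick the latest from Maven Central -->
</dependency>
```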
git checkout -b spark_demo_step_2 origin/spark_demo_step_2
  • Next open up HelloSpark and remove all the existing code… replace it with the snippet below

Just in case you were curious, you’ll note that the Spark documentation uses “import static”, so here is a brief explanation of import static. In short, you can use a class’s static methods without explicitly typing the class name; beware of its pitfalls though.
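If you want a picture of what the HelloSpark snippet looks like, here is a sketch using the 1.x-era anonymous Route API that matches the rest of this tutorial (newer Spark releases use lambdas instead, e.g. get("/hello", (req, res) -> "Hello Spark!")). The second route shows capturing a named URL parameter:

```java
import spark.Request;
import spark.Response;
import spark.Route;
import static spark.Spark.get;

public class HelloSpark {
    public static void main(String[] args) {
        // Basic route: GET http://localhost:4567/hello
        get(new Route("/hello") {
            @Override
            public Object handle(Request request, Response response) {
                return "Hello Spark!";
            }
        });

        // Named parameter capture: GET /hello/:name
        get(new Route("/hello/:name") {
            @Override
            public Object handle(Request request, Response response) {
                return "Hello " + request.params(":name") + "!";
            }
        });
    }
}
```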
  • Now from within IntelliJ click Run -> Run ‘HelloSpark’ from the top menu; the code will start up and then let you know that Spark is currently running, most likely on localhost:4567

  • Launch a new web browser window and go to http://localhost:4567/hello

  • Congrats!!! You are now a super web developer!

So how does that work? Like many MVC applications, Spark provides us a basic router to let our app respond to HTTP requests. The four most commonly used request types are GET, PUT, POST and DELETE. Those four HTTP request types, used in conjunction with the HTTP header(s) of the request, such as Content-Type: application/json or application/x-www-form-urlencoded, allow us to capture and handle all sorts of browser requests. For a good introduction to HTTP and REST see the article on net.tutsplus
  • For some more fun, play around with some of the basic features of Route, such as capturing user-supplied parameters or adding new routes
git checkout -b spark_demo_step_3 origin/spark_demo_step_3
  • Next let’s introduce the POST request. We’re going to use POST to store some data and then display it as a list. This example is very crude but will help us segue into the mini blog tutorial further on in the article.
git checkout -b spark_demo_step_4 origin/spark_demo_step_4
  • In the snippet of code below we use a POST request on the route /add/:item to add things to our list and then use GET on the route /list to display them
  • So update your file, press Build, and then run the code. Launch your web browser and go to http://localhost:4567/list

  • You should be greeted by our message “Try adding some things to your list”

  • Now you might be tempted to try navigating to http://localhost:4567/add/bananas or something

  • When we visit URLs from our web browser, the browser issues a GET request by default; typing an address like google.com calls the GET handler on some Google web server somewhere.

If you’re puzzled as to why you hit a 404 page when we clearly defined a POST route to /add/, you’ve just discovered that our application will only route POST requests to a POST handler method. To fix this we should actually send an HTTP POST request instead of using GET.
  • To send a POST request, open a terminal window and use curl; or, if you’re on a Windows machine, use PowerShell (yes, I said PowerShell; please stop using Command Prompt)
curl -X POST http://localhost:4567/add/apples
Invoke-RestMethod -Uri http://localhost:4567/add/apples -Method POST
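Putting the /add and /list steps together, here is a sketch of the two routes (1.x-style again; the empty-list message comes from the text above, while the class layout is my own simplification):

```java
import java.util.ArrayList;
import java.util.List;
import spark.Request;
import spark.Response;
import spark.Route;
import static spark.Spark.get;
import static spark.Spark.post;

public class HelloSpark {
    // In-memory storage; lives only as long as the server process
    static final List<String> items = new ArrayList<String>();

    public static void main(String[] args) {
        // POST /add/:item stores the URL parameter in our list
        post(new Route("/add/:item") {
            @Override
            public Object handle(Request request, Response response) {
                items.add(request.params(":item"));
                return "Added " + request.params(":item");
            }
        });

        // GET /list shows everything added so far
        get(new Route("/list") {
            @Override
            public Object handle(Request request, Response response) {
                if (items.isEmpty()) {
                    return "Try adding some things to your list";
                }
                return items.toString();
            }
        });
    }
}
```

With the app running, the curl or Invoke-RestMethod calls above will add an item, and a browser GET of /list will then show it.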

CRUD Example: A Blog

Within this section we’ll be creating a basic blog application that will eventually grow more complex as we add more features. It’s important to start off slow, so the first iteration of the blog will be very concise and perform just the bare minimum in order to function. Since this is a CRUD app, each aspect of CRUD will be explored.
  • Before we start, our blog application will need an object representation of an Article. Our article will have a title, summary and content for now. The Article class is just plain old Java, so there really isn’t much to get excited about; the MVC web stuff will follow.

  • Within the same package as your HelloSpark class, create the Article class file

Project structure with
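A plain-old-Java sketch of the Article model described above; the exact field and accessor names in the tutorial repo may differ, and I’ve included the deleted flag that the Delete section relies on later:

```java
import java.util.Date;

public class Article {
    private final int id;          // unique identification number
    private String title;
    private String summary;
    private String content;
    private final Date createdAt;  // timestamp of when the article was created
    private boolean deleted;       // soft-delete flag used later on

    public Article(int id, String title, String summary, String content) {
        this.id = id;
        this.title = title;
        this.summary = summary;
        this.content = content;
        this.createdAt = new Date();
        this.deleted = false;
    }

    public int getId() { return id; }
    public String getTitle() { return title; }
    public String getSummary() { return summary; }
    public String getContent() { return content; }
    public Date getCreatedAt() { return createdAt; }
    public boolean isDeleted() { return deleted; }

    public void setTitle(String title) { this.title = title; }
    public void setSummary(String summary) { this.summary = summary; }
    public void setContent(String content) { this.content = content; }
    public void setDeleted(boolean deleted) { this.deleted = deleted; }
}
```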


  • Return to your code and delete everything, making sure you’re starting off with a clean slate
git checkout -b spark_blog_step_1 origin/spark_blog_step_1
  • We’ll begin by importing all of the necessary files, which include the Spark library and the two java.util classes

  • As a blog, our objective is to Create, Read, Update and Delete articles, which are just bodies of text to which we’ll also assign a unique identification number and a timestamp of when the article was created.

  • When a user hits the root index of the blog we should show the list of articles ordered by their date of creation, or else a message indicating that no articles have been written yet. To accomplish this we’ll add a conditional statement and create a StringBuilder object to render some HTML
  • In order to publish articles we need a way to create them and submit the information to our server side code. Add another GET method which will handle requests made to /article/create

  • On the page is a form which accepts a new title, summary and content for the new blog article

  • Right now you may restart the Spark app and note that by clicking the “Write Article” link you are sent over to the form we created…

  • However, when you click the “Publish” button nothing happens; in order to fix that we need to write a method to handle the POST request called from /article/create

  • We’ll want to persist the article to our storage on the server side code by capturing the form elements article-title, article-summary and article-content
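A rough sketch of the create flow from the last few bullets; the route paths and the article-title/article-summary/article-content field names come from the text, while the class name and string-based storage here are simplified stand-ins for the real Article handling:

```java
import java.util.ArrayList;
import java.util.List;
import spark.Request;
import spark.Response;
import spark.Route;
import static spark.Spark.get;
import static spark.Spark.post;

public class BlogCreateSketch {
    // In-memory storage; the real tutorial stores Article objects instead of strings.
    static final List<String> articles = new ArrayList<String>();

    public static void main(String[] args) {
        // GET /article/create serves the publish form
        get(new Route("/article/create") {
            @Override
            public Object handle(Request request, Response response) {
                return "<form method=\"post\" action=\"/article/create\">"
                     + "<input name=\"article-title\"/>"
                     + "<input name=\"article-summary\"/>"
                     + "<textarea name=\"article-content\"></textarea>"
                     + "<button>Publish</button></form>";
            }
        });

        // POST /article/create captures the submitted form fields
        post(new Route("/article/create") {
            @Override
            public Object handle(Request request, Response response) {
                articles.add(request.queryParams("article-title") + ": "
                        + request.queryParams("article-summary"));
                response.redirect("/");
                return null;
            }
        });
    }
}
```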


git checkout -b spark_blog_step_2 origin/spark_blog_step_2

The next part of CRUD is actually the easiest, since it doesn’t involve modifying data. We’ll use the read article link associated with every Article object, using the unique id number of the article to pull its information from our storage when the user requests GET /article/read/:id from our server
  • Reading an article is very simple: just use a for loop until we find the ID of the article. Of course, a straight-up iterative search is horrific for very large numbers of articles, but we’ll look at alternative data persistence later on in this post.
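The lookup described above is just a linear scan. A self-contained sketch, with a minimal stand-in for the Article model:

```java
import java.util.List;

// Minimal stand-in for the Article model so this sketch is self-contained.
class Article {
    final int id;
    final String title;
    Article(int id, String title) { this.id = id; this.title = title; }
}

public class ArticleSearch {
    // Linear scan: O(n) per lookup; fine for a toy blog, bad at scale.
    static Article findById(List<Article> articles, int id) {
        for (Article article : articles) {
            if (article.id == id) {
                return article;
            }
        }
        return null; // not found
    }
}
```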


git checkout -b spark_blog_step_3 origin/spark_blog_step_3
  • When updating an existing article all we need to do is potentially overwrite the found content, so add a new Route for /article/update/:id

  • The code behind the /article/update/:id will use the same form as the /article/create except the form fields will be pre-populated

  • Now all that is left is to add the POST handler for our update form


git checkout -b spark_blog_step_4 origin/spark_blog_step_4

Along with Read, Delete is another rather simple action since it only requires a single method along with a redirect.
However, if you recall back to when we created the Article model, we had a boolean value called deleted. Any deleted articles are simply marked as deleted and not shown in the UI. Later on, when we explore different types of persistence, we’ll actually delete articles for good, but for now this will have to suffice.
  • Within your controller file, add the method to handle the delete action /article/delete/:id
  • Lastly we need to go back and edit our Blog homepage to hide deleted articles

Putting the V in MVC

blog bootstrap3

Recall how in the previous code all of our views, or HTML code, were simply shoved into our controller routes… this won’t work in actual practice and is in fact not a very sane way to structure code. We’ll soon work out a method to separate our core logic from what our clients view. This idea is what brings us to the View portion of Model View Controller.
  • In this section we will be using a very powerful Java templating engine called Freemarker which will allow us to separate our Controller logic from our View layer

  • Lucky for us, the author of the Spark framework has already created a library, spark-template-freemarker, which provides an interface to the Freemarker template engine. So open up your pom.xml file and add the following dependency.
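The dependency will be along these lines; the version is an assumption, so check Maven Central:

```xml
<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-template-freemarker</artifactId>
    <version>2.7.1</version> <!-- check Maven Central for the current version -->
</dependency>
```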

However, before we start demonstrating the power of a well-made HTML templating engine, it’s important not to let your templated HTML get out of hand, to the point where it substitutes for your entire application. A famous blog post titled Your templating engine sucks and everything you have ever written is spaghetti code takes a critical look at how easy it is to completely and utterly abuse your code by overusing template engines until all of it basically becomes PHP4… and let’s not go back to those days. For the TL;DR, the author basically says to avoid heavy use of conditionals and functions/macros within your templated HTML; think of it like developing a Java application, where you rarely ever want to manually invoke the garbage collector.
  • Let’s keep moving… for now we’ll add a test Route to our application before we go back and refactor the blog code to remove the messy string-injected HTML.
git checkout -b spark_view_step_1 origin/spark_view_step_1
  • In the code below we create a HashMap which will map our Java objects to variables which can be called directly from our View templated HTML files

  • The HashMap elements blogTitle, descriptionTitle, and the two descriptionBody entries will be referred to within our Freemarker templates and appear exactly as they do within the file.
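The view-model HashMap might be built like this; the key names come from the text, while the values (and the 1/2 suffixes on the two descriptionBody entries) are my own guesses:

```java
import java.util.HashMap;
import java.util.Map;

public class ViewModelExample {
    static Map<String, Object> buildViewModel() {
        Map<String, Object> viewObjects = new HashMap<String, Object>();
        // Keys must match the ${...} names used inside the .ftl templates
        viewObjects.put("blogTitle", "Sparkle Blog");
        viewObjects.put("descriptionTitle", "A Spark powered blog");
        viewObjects.put("descriptionBody1", "First paragraph of the description");
        viewObjects.put("descriptionBody2", "Second paragraph of the description");
        return viewObjects;
    }
}
```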

  • Next within your IntelliJ project directory create the folder structure beginning with the resources directory resources/spark/template/freemarker

  • Once that is done, right-click on the newly created directory and add the file “layout.ftl”. The naming here is important, since we will be discussing a common pattern in MVC, which is to split your Views between layouts and templates.

Layouts are like view containers which hold multiple templates. Take my blog for example; it uses a layout which holds the top navigation bar and the Disqus comments in the footer, and swaps out article templates for each of my blog posts. Intelligent use of templates and layouts means that we can serve different views to our clients depending on the data sent to the view from the controller.
  • This is not a tutorial on HTML and CSS so for now lets just assume the HTML code is correct.

  • Anyhow, the code below is for the file layout.ftl; notice where we inject the Java variables we sent to the view using the ${some_variable_name_here} syntax. Don’t forget to check out the documentation for Freemarker, or Google for some Freemarker tutorials if you are confused.
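As a minimal illustration of the ${...} injection (the markup itself is illustrative, not the tutorial’s actual layout.ftl; the variable names come from the HashMap above):

```html
<#-- layout.ftl sketch: variables injected from the controller's HashMap -->
<!DOCTYPE html>
<html>
<head>
    <title>${blogTitle}</title>
</head>
<body>
    <h1>${blogTitle}</h1>
    <h2>${descriptionTitle}</h2>
</body>
</html>
```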

  • By the way, don’t forget to experiment with Freemarker. Try passing several variables to the ftl file and get the hang of templating; it’s a popular technique used in many different programming languages, including the JavaScript framework AngularJS

View templates and layouts

git checkout -b spark_view_step_2 origin/spark_view_step_2
  • Given the new Bootstrap 3 powered homepage we just completed, let’s now go back and refactor our old code to move the HTML-injected strings out of our Controllers and into proper HTML files.

  • Create a new file called articleList.ftl or just edit the existing one and place it within the directory sparkle/src/main/resources/spark/template/freemarker/articleList.ftl

  • Now open up the GET method for the “/“ url and change it to use the FreeMarkerRoute instead of the regular Route. For our refactor we’re going to create a HashMap to store the Java objects we wish to pass on to the articleList.ftl view file.

Finally, no more creating String objects to hold our HTML!
  • With our Controller updated to use the layout.ftl file we now need to update our layout.ftl with the new values provided by the viewObject HashMap

  • Layouts allow us to embed child HTML pages within them, so pay attention to the code where we inject a template called articleList.ftl via the “include” Freemarker tag. This allows us to separate our head and navigation links from our article View markup.

  • Finally, we’ll create another Freemarker template file within the same directory as layout.ftl called “articleList.ftl”

Look at the code snippet for the articleList.ftl file. Pay special attention to how templating engines such as FreeMarker allow us to use conditional statements and loop over enumerable objects such as Arrays and HashMaps. However, if you remember the article about why your templating engine sucks, then you should agree that conditionals and loops are about all our templating engine should be responsible for… more complex logic should stay server-side within the respective Controller.
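A sketch of what such a template can look like, showing a Freemarker conditional plus a loop; the variable names are assumptions:

```html
<#-- articleList.ftl sketch: the "articles" variable name is an assumption -->
<#if articles?? && articles?size gt 0>
  <ul>
    <#list articles as article>
      <li><a href="/article/read/${article.id}">${article.title}</a> ${article.summary}</li>
    </#list>
  </ul>
<#else>
  <p>No articles have been written yet</p>
</#if>
```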
git checkout -b spark_view_step_3 origin/spark_view_step_3
  • Next we’re going to redo the write article form, so open up the controller and edit the route /article/create
  • Notice how simple our form view code became within the controller, because all of our view-specific code is now placed within an actual HTML file instead of nasty string-appended spaghetti code

  • Create the freemarker template file called articleForm.ftl

git checkout -b spark_view_step_4 origin/spark_view_step_4
  • Lastly we need to update the views for editing and reading articles, so open up your controller and change the following routes until the code matches.
  • First we’re going to add a new freemarker template file called articleRead.ftl
  • Lastly we need to re-use our existing articleForm.ftl, but when the user chooses to update an article we need to populate the form with the article content to be edited

  • This can be accomplished by using Freemarker conditionals to check for an existing article and, if found, place its attributes within the form fields

Some other persistence options

In this section we’re going to explore different ways of storing our application data other than strictly within the memory of our Java servlet. Being a more pragmatic developer, I’ve chosen to stick with three of the more popular database models (relational, document store and key-value) for educational purposes.

While you’re at it be sure to at least learn a bit about alternative Database Models such as wide-column stores, search optimized databases, graph databases and lesser known db models such as content stores.

Relational Storage via Postgres

There has always been that lingering question in the open-source community about the pros and cons of MySQL vs Postgres (or PostgreSQL, but I prefer it by the street name), but over the years Postgres has finally caught up in terms of features and performance (probably due to Heroku and AWS, but that’s another topic). As for myself, I’ve never used Postgres before, since all of my professional work has used MySQL on a LAMP stack, but it’s always fun to learn new things.

An article on the rise of Postgres

Getting started with PostgreSQL

  • OK, let’s start: go to the PostgreSQL download page

  • I’m using Ubuntu Linux but choose whatever platform you need; follow the instructions and continue reading this article when you have Postgres installed.

sudo apt-get install postgresql
  • Let’s verify that the postgres installer worked; type into your console
which psql
  • In order to start the postgres command line interface, use the psql command

If you encounter the error message psql: FATAL: role “$USER” does not exist, then you probably need to run psql as the postgres admin user; to do that just run the following command.
sudo -u postgres psql
  • You should get familiar with the postgres command line interface (cli) before continuing; it differs a bit from other database systems in that many admin features are separate terminal commands which are run from outside the cli
git checkout -b spark_storage_step_1 origin/spark_storage_step_1
  • In order for us to begin using Postgres, update your pom.xml file and add a new dependency for the PostgreSQL JDBC driver.
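The driver dependency will look roughly like this; the version shown is an assumption, so check Maven Central:

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.3</version> <!-- check Maven Central for the current version -->
</dependency>
```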

  • From the command line run the following to create the database which we’ll be using

sudo -u postgres createdb sparkledb
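For reference, an articles table backing the blog might be defined like this; the schema is my own sketch mirroring the Article fields, not necessarily what the tutorial repo uses:

```sql
CREATE TABLE articles (
    id         SERIAL PRIMARY KEY,
    title      TEXT NOT NULL,
    summary    TEXT,
    content    TEXT,
    created_at TIMESTAMP NOT NULL DEFAULT now(),
    deleted    BOOLEAN NOT NULL DEFAULT FALSE
);
```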
  • The first thing we’ll want to do is build a service class to interact with each of our database types; within your project’s java directory, create the ArticleDbService file

A Short Intermission: Refactoring The Servlet In-Memory Storage Into A Data Access Object Class

  • The file we just created serves as the interface to the persistence storages we will implement; we are currently set up to use a Java ArrayList for our storage, and this will be moved into a new file.

  • Create the ArticleServletDao file

  • The ArticleServletDao implements the ArticleDbService methods and as such will allow us to swap it out as the current implementation of the ArticleDbService object within our Blog application.
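To make the shape of this refactor concrete, here is a self-contained sketch; the method names on the interface are my guesses at the tutorial’s actual API, and the inline Article class is a minimal stand-in for the real model:

```java
import java.util.ArrayList;
import java.util.List;

// The interface the blog talks to; concrete DAOs are swapped in behind it.
interface ArticleDbService {
    void addArticle(Article article);
    Article getArticle(int id);
    List<Article> listArticles();
}

// Minimal stand-in for the Article model so this sketch compiles on its own.
class Article {
    final int id;
    final String title;
    Article(int id, String title) { this.id = id; this.title = title; }
}

// The original in-memory storage, moved behind the interface.
class ArticleServletDao implements ArticleDbService {
    private final List<Article> articles = new ArrayList<Article>();

    public void addArticle(Article article) { articles.add(article); }

    public Article getArticle(int id) {
        for (Article a : articles) {
            if (a.id == id) return a;
        }
        return null;
    }

    public List<Article> listArticles() { return articles; }
}
```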

  • Now let’s go back and refactor the controller to use the ArticleDbService. The changes are mentioned within the comments, so read them carefully; notice how much simpler our Route methods have become since we moved the data access logic to its own service(s).

Building a PostgreSQL DAO for our Blog

git checkout -b spark_storage_step_2 origin/spark_storage_step_2
  • Before we can connect to the database from the Java class we created we need to set a password for the postgres user.
sudo -u postgres psql
  • Then, from the psql command line interface, set the password; to keep the example simple, let’s use the username as the password (that way the Java code won’t fail to connect)
alter user postgres password 'postgres';
  • Our Data Access Object (DAO) class for Postgres should be created as a new file named ArticlePostgresDao

  • The ArticlePostgresDao SQL code is pretty straightforward as far as the SQL goes, since the queries are fairly basic (there are no complex JOINs or temp tables), so I’ve left some helpful comments throughout the file.
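Heavily abridged, the JDBC code inside such a DAO looks like this; the connection details match the setup in this section, while the class name, table name and SQL are my own sketch:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ArticlePostgresDaoSketch {
    // The sparkledb database created earlier, via the PostgreSQL JDBC driver
    private static final String URL = "jdbc:postgresql://localhost:5432/sparkledb";

    public String readTitle(int id) throws SQLException {
        // user/password "postgres"/"postgres" as configured above
        try (Connection conn = DriverManager.getConnection(URL, "postgres", "postgres");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT title FROM articles WHERE id = ?")) {
            stmt.setInt(1, id);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("title") : null;
            }
        }
    }
}
```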

  • One more quick refactor… add the following constructor to the Article class, which is used by the ArticlePostgresDao.
  • To use the ArticlePostgresDao, just swap its name in place of the ArticleServletDao within the controller file. I hope you’re starting to see the power of the famous design principle “program to interfaces, not implementations”.

Document Storage with MongoDB

git checkout -b spark_storage_step_3 origin/spark_storage_step_3
  • Update your pom.xml to include the MongoDB Java Driver
  • The key aspect of using document stores versus, say, a traditional relational DB is that we can (optionally) forgo schema design and just store our data as freeform documents.
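The MongoDB Java Driver dependency mentioned above will be roughly as follows (2.x-era coordinates; check Maven Central for the current version):

```xml
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>2.14.3</version> <!-- a 2.x-era version; check Maven Central -->
</dependency>
```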

  • Begin by downloading and installing MongoDB; it shouldn’t be too much of a hassle. If you get stuck, read the manual

  • Now that we’re done installing MongoDB, create a new DAO file within your src/main/java directory

  • We’re not even scratching the surface of what MongoDB is fully capable of, due to the simplicity of this application, but I imagine you’re starting to picture the flexibility one obtains by removing the constraint of a rigid schema from the underlying DAO.
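For flavor, inserting a freeform document with the 2.x-era driver API looks something like this (database, collection and field names are illustrative; newer drivers use MongoDatabase/Document instead):

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class ArticleMongoSketch {
    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("sparkledb");
        DBCollection articles = db.getCollection("articles");

        // Documents are freeform: adding a field requires no schema migration
        BasicDBObject doc = new BasicDBObject("title", "Hello Mongo")
                .append("summary", "A document, not a row")
                .append("content", "Body text here");
        articles.insert(doc);

        client.close();
    }
}
```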

Homework: Create a checklist application

  • Use the examples presented in the tutorial to write a Java web app using Spark to let a user create, read, update and delete daily tasks.