Tuesday, June 23, 2015

Problems with Cobertura and Sonar 5.1

Recently, I was having some bother trying to use Sonar 5.1 with my Grails 2.4.4 project. I was using the usual Groovy stuff: GMetrics, CodeNarc and Cobertura. For the Sonar database I was using Postgres 9.4.

The logfile for the Sonar runner just gave me this:

build 22-Jun-2015 07:44:30 INFO: ------------------------------------------------------------------------
build 22-Jun-2015 07:44:30 INFO: EXECUTION FAILURE
build 22-Jun-2015 07:44:30 INFO: ------------------------------------------------------------------------
build 22-Jun-2015 07:44:30 Total time: 9.153s
build 22-Jun-2015 07:44:30 Final Memory: 30M/1039M
build 22-Jun-2015 07:44:30 INFO: ------------------------------------------------------------------------
error 22-Jun-2015 07:44:30 ERROR: Error during Sonar runner execution
error 22-Jun-2015 07:44:30 ERROR: Unable to execute Sonar
error 22-Jun-2015 07:44:30 ERROR: Caused by: Unable to save file sources
error 22-Jun-2015 07:44:30 ERROR: Caused by: -1
Not much use! My first thought was a permissions problem, since "Unable to save file sources" usually means exactly that. But there were no permission issues. I then disabled the Cobertura part of the analysis and everything ran fine, so the problem clearly lay with Cobertura. Next I:
  • enabled verbose logging -- sonar.verbose=true
  • enabled full stack trace logging -- using the -e switch
  • enabled full debug logging -- using the -X switch
This provided a few more clues.
error 22-Jun-2015 11:09:06 ERROR: Error during Sonar runner execution
build 22-Jun-2015 11:09:06 INFO: ------------------------------------------------------------------------
error 22-Jun-2015 11:09:06 org.sonar.runner.impl.RunnerException: Unable to execute Sonar
error 22-Jun-2015 11:09:06  at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:91)
error 22-Jun-2015 11:09:06  at org.sonar.runner.impl.BatchLauncher$1.run(BatchLauncher.java:75)
error 22-Jun-2015 11:09:06  at java.security.AccessController.doPrivileged(Native Method)
error 22-Jun-2015 11:09:06  at org.sonar.runner.impl.BatchLauncher.doExecute(BatchLauncher.java:69)
error 22-Jun-2015 11:09:06  at org.sonar.runner.impl.BatchLauncher.execute(BatchLauncher.java:50)
error 22-Jun-2015 11:09:06  at org.sonar.runner.api.EmbeddedRunner.doExecute(EmbeddedRunner.java:102)
error 22-Jun-2015 11:09:06  at org.sonar.runner.api.Runner.execute(Runner.java:100)
error 22-Jun-2015 11:09:06  at org.sonar.runner.Main.executeTask(Main.java:70)
error 22-Jun-2015 11:09:06  at org.sonar.runner.Main.execute(Main.java:59)
error 22-Jun-2015 11:09:06  at org.sonar.runner.Main.main(Main.java:53)
error 22-Jun-2015 11:09:06 Caused by: java.lang.IllegalStateException: Unable to save file sources
error 22-Jun-2015 11:09:06  at org.sonar.batch.index.SourcePersister.persist(SourcePersister.java:84)
error 22-Jun-2015 11:09:06  at org.sonar.batch.phases.DatabaseModePhaseExecutor.executePersisters(DatabaseModePhaseExecutor.java:165)
error 22-Jun-2015 11:09:06  at org.sonar.batch.phases.DatabaseModePhaseExecutor.execute(DatabaseModePhaseExecutor.java:133)
error 22-Jun-2015 11:09:06  at org.sonar.batch.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:264)
error 22-Jun-2015 11:09:06  at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
error 22-Jun-2015 11:09:06  at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
error 22-Jun-2015 11:09:06  at org.sonar.batch.scan.ProjectScanContainer.scan(ProjectScanContainer.java:235)
error 22-Jun-2015 11:09:06  at org.sonar.batch.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:230)
error 22-Jun-2015 11:09:06  at org.sonar.batch.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:220)
error 22-Jun-2015 11:09:06  at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
error 22-Jun-2015 11:09:06  at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
error 22-Jun-2015 11:09:06  at org.sonar.batch.scan.ScanTask.scan(ScanTask.java:57)
error 22-Jun-2015 11:09:06  at org.sonar.batch.scan.ScanTask.execute(ScanTask.java:45)
error 22-Jun-2015 11:09:06  at org.sonar.batch.bootstrap.TaskContainer.doAfterStart(TaskContainer.java:135)
error 22-Jun-2015 11:09:06  at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
error 22-Jun-2015 11:09:06  at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
error 22-Jun-2015 11:09:06  at org.sonar.batch.bootstrap.GlobalContainer.executeTask(GlobalContainer.java:158)
error 22-Jun-2015 11:09:06  at org.sonar.batch.bootstrapper.Batch.executeTask(Batch.java:95)
error 22-Jun-2015 11:09:06  at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:67)
error 22-Jun-2015 11:09:06  at org.sonar.runner.batch.IsolatedLauncher.execute(IsolatedLauncher.java:48)
error 22-Jun-2015 11:09:06  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
error 22-Jun-2015 11:09:06  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
error 22-Jun-2015 11:09:06  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
error 22-Jun-2015 11:09:06  at java.lang.reflect.Method.invoke(Method.java:606)
error 22-Jun-2015 11:09:06  at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:87)
error 22-Jun-2015 11:09:06  ... 9 more
error 22-Jun-2015 11:09:06 Caused by: java.lang.ArrayIndexOutOfBoundsException: -1
error 22-Jun-2015 11:09:06  at java.util.ArrayList.elementData(ArrayList.java:371)
error 22-Jun-2015 11:09:06  at java.util.ArrayList.get(ArrayList.java:384)
error 22-Jun-2015 11:09:06  at com.google.protobuf.RepeatedFieldBuilder.getBuilder(RepeatedFieldBuilder.java:245)
error 22-Jun-2015 11:09:06  at org.sonar.server.source.db.FileSourceDb$Data$Builder.getLinesBuilder(FileSourceDb.java:2911)
error 22-Jun-2015 11:09:06  at org.sonar.batch.index.SourceDataFactory.
Now, I could see earlier in the log that the Cobertura analysis had finished and that the Cobertura coverage.xml had been generated ok (this is the file which collates the code coverage info). The next step after creating the coverage.xml file was for the Sonar runner to parse it and send the results to Postgres. Connecting to Postgres was definitely not the issue (remember, everything was fine when Cobertura was disabled), so something had to be going wrong at the parsing stage. That suggested there was something odd in the coverage.xml file which the Sonar runner could not parse. As stated, the coverage.xml file details which line numbers in each class have and haven't been covered. Sample (class names here are illustrative):

<classes>
    <class name="com.dublintech.me.SomeClass" filename="com/dublintech/me/SomeClass.groovy" line-rate="0.75" branch-rate="0.5">
        <methods>
            ...
        </methods>
        <lines>
            <line number="16" hits="2" branch="false"/>
            <line number="17" hits="0" branch="false"/>
            ...
        </lines>
    </class>
    ...
</classes>
So what kind of thing could make the parsing barf? What if there was some odd line number in the coverage.xml file? Hmmm... To check this, I ran the following grep:
> grep "line number" coverage.xml
This gave far too much output. What about any negative line numbers?
> grep "line number=\"\-" coverage.xml
Nope, none. Ok, back to the exception and this line:
java.lang.ArrayIndexOutOfBoundsException: -1
Hmmm... if a line number was 0, could that make some array indexing in the Sonar runner throw an index out of bounds?
> grep "line number=\"0" coverage.xml
Hit! Time to grep the lines before and after to get more info about this file.
> grep -C20 "line number=\"0" coverage.xml
This gave me the culprit. It made no sense to me why Cobertura was saying that line number 0 had 0 hits. It was still possible to open the Cobertura HTML report and view the analysis; Sonar was just barfing when parsing the XML. So I removed this file from the Cobertura analysis by adding the following to my build config:
coverage {
    xml = true
    exclusions = [
        "**/com/dublintech/me/MyOddFile*"
    ]
}
I then re-ran and hey presto, everything was working. The offending class was no longer in the coverage.xml file, which meant the Sonar runner could parse it and the analysis completed fine.
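For future reference, a quick Groovy script can sanity-check a coverage.xml before the Sonar run. This is only a minimal sketch - the report path is an assumption, so adjust it for your project:
def slurper = new XmlSlurper()
// avoid fetching the Cobertura DTD over the network
slurper.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
def coverage = slurper.parse(new File("target/test-reports/cobertura/coverage.xml"))

// flag any line numbers that Sonar's protobuf builder can't cope with (< 1)
coverage.packages.'package'.classes.'class'.each { clazz ->
    clazz.lines.line.findAll { it.@number.text().toInteger() < 1 }.each { line ->
        println "Suspicious line number ${line.@number} (hits=${line.@hits}) in ${clazz.@filename}"
    }
}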

I like Sonar, I like a stable build and I like rapid feedback, so yeah, I was a happy person when it was all working again!


Saturday, May 30, 2015

Using separate Postgres Schemas for the same database in a Grails App

Recently, I wanted to use the same Postgres Database but split my persistence layer into separate components which used separate schemas. The motivation was to promote modular design, separate concerns and stop developers tripping up over each other. Vertical domain models can be difficult to achieve but not impossible.

In my shopping application, I had a user component, a shopping component and a product component. Now this is pretty easy if you are using separate databases, but sometimes it's nice to just get the separation of concerns using separate schemas in the same database, since using the same database can make things like DR, log shipping, replication etc easier.

While I could find documentation for separate databases, I found it difficult to find Grails documentation covering my specific problem: how to use separate schemas within the same Postgres database. So here is how I ended up doing it.

Here is my DataSource.groovy:
String db_url = "jdbc:postgresql://localhost:5432/testdb"
String usernameVar = "db_user"
String passwordVar = "db_secret"
String dialectVar = "net.kaleidos.hibernate.PostgresqlExtensionsDialect"

dataSource_user {
    pooled = true
    jmxExport = true
    dialect = dialectVar
    driverClassName = "org.postgresql.Driver"
    username = usernameVar
    password = passwordVar
    url = db_url
    dbCreate = "validate"
}

hibernate_user {
    cache.use_second_level_cache = false
    cache.use_query_cache = false
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory' // Hibernate 3
    singleSession = true // configure OSIV singleSession mode
    default_schema = "user"
}

dataSource_shopping {
    pooled = true
    jmxExport = true
    dialect = dialectVar
    driverClassName = "org.postgresql.Driver"
    username = usernameVar
    password = passwordVar
    url = db_url
    dbCreate = "validate"
}

hibernate_shopping {
    cache.use_second_level_cache = false
    cache.use_query_cache = false
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory' // Hibernate 3
    singleSession = true // configure OSIV singleSession mode
    default_schema = "shopping"
}

dataSource_product {
    pooled = true
    jmxExport = true
    dialect = dialectVar
    driverClassName = "org.postgresql.Driver"
    username = usernameVar
    password = passwordVar
    url = db_url
    dbCreate = "validate"
}

hibernate_product {
    cache.use_second_level_cache = false
    cache.use_query_cache = false
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory' // Hibernate 3
    singleSession = true // configure OSIV singleSession mode
    default_schema = "product"
}
Note: there are some obvious optimisations that could be made to the config above, but keeping it verbose makes the explanation simpler.

I then mapped each GORM object to the appropriate schema.

class Cart {
    // ...
    // ...
    static mapping = {
        datasource 'shopping'
        // ... 
    }
}

class Address {
    // ...
    // ...

    static mapping = {
        datasource 'user'
    }
}

class Stock {
    // ...
    // ...

    static mapping = {
        datasource 'product'
    }
}
I then started my app, said "yippee, this works", had a little tea break and moved on to the next problem. As you can see, the trick is to use a separate hibernate closure for each component, specify the schema in it, and name the closures using the same naming convention as you would for separate databases, while making all the dataSource closures point to the same database.
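For completeness, here is a rough sketch of how a service can then be pinned to one of these datasources, so that all of its GORM work goes to the shopping schema. The class and method names here are hypothetical:
class CheckoutService {

    // pin this service's transactions to the shopping datasource / schema
    static datasource = 'shopping'

    Cart addToCart(Long cartId, Long stockId) {
        Cart cart = Cart.get(cartId)     // Cart is mapped to the 'shopping' datasource
        Stock stock = Stock.get(stockId) // Stock is mapped to 'product', so GORM routes it there
        // ... business logic to add the stock item to the cart ...
        cart.save(flush: true)
        cart
    }
}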

Sunday, June 29, 2014

Book Review: REST API Design Rulebook (Mark Masse)

RESTful style architectures are becoming more and more ubiquitous. There are many reasons for this:
  • Web Tiers are full of JavaScript yearning to make nice simple AJAX requests
  • The obvious shortcomings of SOAP style strong contracts 
  • They are a nice alternative to ESB's for integrating heterogeneous architectures
However, like the MVC design pattern or the Agile software methodology, while many projects may claim to be using the RESTful architectural approach, everyone is doing it differently.

In 2010, Martin Fowler - in his excellent blog - discussed the Richardson Maturity Model. This model provides a very good categorisation technique for assessing the degree of RESTfulness based on how many core REST principles are being used. Although that same model gets a reference in Mark Massé's REST API Design Rulebook, Massé's book goes into much more low-level detail about various REST best practices.

For example:
  • Negative caching: adding cache headers not just to positive responses but also to 3xx and 4xx responses. This is something that may not be obvious, but it could be a good performance / scalability boost depending on the nature of your application, user usage patterns etc.
  • How to version your URIs, your representational forms and your resource objects
  • Using a consistent form to represent link relations
In addition, there is an abundance of other ideas and guidelines, some pretty simple but nonetheless important:
  • Don't end URI's with a slash
  • Don't use underscores in URI paths
  • Put "api" in the domain part of your REST resource path, e.g.
https://api.dropbox.com/1/oauth/request_token
The main reason why I think it is good to have a book like this is that when a development team is trying to use a REST-style architecture, disagreements and misunderstandings will inevitably happen. For example, the proverbial: 'Should a URI end in a plural or singular noun?'
It is always good to be able to reference a respected industry resource to prevent rabbit holes appearing and eating into your valuable project time.

Furthermore, there are some really quick and easy things you can do to make a much better REST API that are discussed in the book. For example:
  • Adding an ETag HTTP header to that shopping cart resource as items go in and out of it.
  • Using query fields to generate partial responses and using the ! for an excludes option.
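To illustrate how cheap the ETag win can be, here is a rough sketch of a Grails controller action that does it - the Cart class and the version-based ETag value are my own assumptions, not something prescribed by the book:
import grails.converters.JSON

class CartController {

    def show(Long id) {
        Cart cart = Cart.get(id)
        String etag = "\"cart-${cart.version}\"" // any value that changes whenever the cart changes

        if (request.getHeader('If-None-Match') == etag) {
            render(status: 304) // the client's cached copy is still good
            return
        }
        response.setHeader('ETag', etag)
        render cart as JSON
    }
}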
Now for some constructive criticism. Firstly, I don't think there will ever be complete consistency in REST approaches. Some of the so-called best practices could be argued to be subjective, or just nice-to-haves. They are not things that will make a big difference to either the functional or non-functional aspects of your architecture. Some of the industry leaders not only take different approaches in their REST APIs, but they are also sometimes doing the opposite of what Massé suggests. For example, Massé suggests not including a file extension type in your REST URLs (see Chapter 2), but (at the time of writing) Twitter do just that (the URI is: https://api.twitter.com/1.1/statuses/mentions_timeline.json).

Furthermore, in a lot of projects you will be writing a REST API purely to serve JSON back to a web client. This is quite different to a public-facing API on the scale of Twitter, Amazon etc. Therefore you need to ask yourself if you really need to put in the time investment to adhere to every single REST best practice, instead of just doing what makes sense for your project. Some practices obviously make sense; some could be argued to be overkill - for example, making sure you send back the correct HTTP code when you know the client never even checks whether it is a 403 or 405.

I do think this book is written more as if you were designing a public-facing API. If you find yourself in that situation you should definitely, definitely, definitely be considering everything Massé says in the book. But note, the emphasis is on considering, which doesn't always mean adhering or adopting.

The book does a very good job of covering best practices that a lot of developers just wouldn't think of (setting response headers such as Content-Length, setting the Location header in responses for newly created resources, how to support HTTP 1.0 when you are trying to discourage caching) and is definitely worth a read, but you really have to think about what makes sense for your project. As stated, some of the suggestions are quick wins; for others you have to weigh the cost against the benefit. Following the core REST principles (statelessness, a good resource model, a uniform interface etc.) is what is really important; after that, the challenge is figuring out what works best for each specific project and how to make the most of your time. That will vary from project to project and will depend on project scope, scale etc. A good architectural decision should always consider the various options but make the appropriate decision for the specific project.

Until the next time, take care of yourselves.







Tuesday, June 17, 2014

MongoDB and Grails

So recently, I had a requirement to store unstructured JSON data that was coming back from a web service. The web service was returning various soccer teams from around the world. Amongst the data for most of the teams was a list of the soccer players who were part of the team. Some of the teams had 12 players, some had 20, some had even more than 20. The players had their own attributes, some easy to predict, some impossible. For the entire data structure, the only attribute that I knew would definitely be coming back was the team's teamname. After that, it depended on each team.
{
   "teams": [{
       "teamname":"Kung fu pirates",
       "founded":1962,
       "players": [
          {"name": "Robbie Fowler", "age": 56},
          {"name": "Larry David", "age": 55}
          ...
        ]},
        { 
        "teamname":"Climate change observers",
        "founded":1942,
        "players": [
          {"name": "Jim Carrey", "age": 26},
          {"name": "Carl Craig", "age": 35}
          ...
        ]},
        ...
   ]

}
There are several different ways to store this data. I decided to go for MongoDB. The main reasons:
  • I wanted to store the data in as close as possible format to the JSON responses I was getting back from the web service. This would mean, less code, less bugs, less hassle.
  • I wanted something that had a low learning curve, good documentation and good industry support (Stack Overflow threads, blog posts etc.)
  • Something that had a Grails plugin that was documented, had footfall and looked like it was maintained
  • Features such as text stemming were nice-to-haves. Some support would have been nice, but it didn't need to be cutting edge.
  • Would support good JSON search facilities, indexing, etc.
MongoDB ticked all the boxes. So this is how I got it all working. After I had installed MongoDB (as per Mongo's instructions) and the MongoDB Grails plugin, it was time to write some code. Now here's the neat part: there was hardly any code. I created a domain object for the Team.
class Team implements Serializable {

    static mapWith = "mongo"

    static constraints = {
    }

    static mapping = {
        teamname index: true
    }

    String teamname

    List players
    static embedded = ['players']
}
Regarding the Team domain object:
  • The first point to make about the Team domain object is that I didn't strictly need to create it at all. The reason I did was so that I could use GORM-style APIs such as Team.find() if I wanted to.
  • Players are just a List of objects. I didn't bother creating a Player class. I like the idea of always ensuring the players for a team are kept in a List data structure, but I didn't see the need to type anything further.
  • The players are marked as embedded. This means the team and players are stored in a single denormalised data structure. This allows - amongst other things - the ability to retrieve and manipulate the team data in a single database operation.
  • I marked the teamname as an index.
  • I marked the domain object as
    static mapWith = "mongo"
    This means that if I was also using another persistence solution with GORM (Postgres, MySQL, etc.) I am telling GORM that this Team domain class is only for Mongo - keep your relational hands off it. See here for more info. Note: this is a good reminder that GORM is a higher level of abstraction than Hibernate. It is possible to have a GORM object that doesn't use Hibernate at all but instead goes to a NoSQL store.
You'll notice in the JSON that there are team attributes, such as founded, that haven't been explicitly declared in the Team class. This is where Groovy and NoSQL play really well with each other. We can use some of the metaprogramming features of Groovy to dynamically add attributes to the Team domain object.
private List importTeams(int page) {
    def rs = restClient.get("teams") // invoke the web service
    List teams = rs.responseData.teams.collect { teamResponse ->
        Team team = new Team(teamname: teamResponse.teamname)
        team.save() // first save is needed before attributes can be added dynamically
        teamResponse.each { key, value ->
            team["$key"] = value // dynamically add each attribute from the JSON response
        }
        team.save() // second save to ensure the dynamically added attributes are persisted
        return team
    }
    log.info("importTeams(), teams=${teams}")
    teams
}
Ok, so the main points in our importTeams() method:
  • After getting our JSON response we run a collect function on the teams array. This creates the Team domain objects.
  • We use some metaprogramming to dynamically add any attribute that comes back in the JSON team structure to the Team object. Note: we have to invoke save() first before we can dynamically add attributes that are not declared in the Team domain class, and we have to invoke save() again to ensure those dynamically added attributes are persisted. This may change in future versions of the MongoDB plugin, but it is what I had to do to get it working (I was using MongoDB plugin version 3.0.1). A quick read-back example is shown below.
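To show what the dynamic attributes buy us, here is a small sketch of reading one back (using the sample team from the JSON above):
Team team = Team.findByTeamname("Kung fu pirates")
assert team.teamname == "Kung fu pirates" // declared attribute, normal property access
def founded = team['founded']             // dynamic attribute added during the import
println "Founded: ${founded}"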
So what's next? Writing some queries. Ok, so there are two choices here. First, you can use the GORM dynamic finders and criteria queries, thanks to the MongoDB plugin. But I didn't do this. Why? I wanted to write the queries as close as possible to how they would be written in Mongo. There were a number of reasons for this:
  • A leaky abstraction is inevitable here. Sooner or later you are going to have to write a query that GORM won't handle very well. Better to approach this head on.
  • I wanted to be able to run the queries in the Mongo console first, check explain plans if I needed to, and then use the same query in my code. This is easier if I write the query directly, without having to worry about what GORM is going to do with it.
The general format of queries is:
teams = Team.collection.find(queryMap) // where queryMap is a map of fields and the various values you are searching for. 
Ok, some examples of queries...
Team.collection.find(["teamname": "hicks"]) // Find a team named hicks
Team.collection.find(["teamname": "hicks", "players.name": "Robbie Fowler"]) // As above, but the team must also have a Robbie Fowler
Team.collection.find(["players.name": "Robbie Fowler"]) // Any team that has a Robbie Fowler
Team.collection.find(["teamname": "hicks", "players.name": "Robbie Fowler"], ['players.$': 1]) // As above, but the projection returns the matching player only
Team.collection.find(["teamname": ~/ick/]) // Match on the regular expression /ick/, i.e. any team whose name contains the text ick
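For comparison, here is roughly what the same kind of lookup looks like with GORM dynamic finders (the approach I decided against):
Team team = Team.findByTeamname("hicks")     // same as the first native query above
List teams = Team.findAllByTeamname("hicks") // all matching teams rather than just the first
// the nested player lookups are where writing the native Mongo query directly earns its keep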
Anything else? Yeah, sure. I wanted to connect to a Mongo instance on my own machine in development, but to a Mongo instance on a dedicated server in the other environments (CI, stage, production). To do this, I updated my DataSource.groovy as follows:
environments {
    development {
        grails {
            mongo {
                host = "localhost"
                port = 27017
                username = "test"
                password = "test"
                databaseName = "mydb"
            }
        }
        dataSource {
            dbCreate = "create-drop" // one of 'create', 'create-drop', 'update', 'validate', ''
            url = "jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000"
        }
    }

    
    ci {
        println("In bamboo environment")
        grails {
            mongo {
                host = "10.157.192.99"
                port = 27017
                username = "shop"
                password = "shop"
                databaseName = "tony"
            }
        }
        dataSource {
            dbCreate = "create-drop" // one of 'create', 'create-drop', 'update', 'validate', ''
            url = "jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000"
        }
    }
}
You'll see I have configured multiple datasources: MongoDB plus a relational dataSource (an in-memory H2 database in the config above, but it could just as easily be Postgres). I am not advocating using both MongoDB and a relational database; I am just pointing out that it is possible. The other point is that the MongoDB configuration always sits under grails { mongo { ... } }.

Ok this is a simple introductory post, I will try to post up something more sophisticated soon. Until the next time, take care of yourselves.