Showing posts with label Testing.

Tuesday, March 14, 2017

Consumer Driven Testing with Pact & Spring Boot

Recently a colleague of mine stumbled across Pact.io. Our current application had grown to over 50 services, we were starting to see integration test failures, and our dev / acceptance test environment had become brittle, so we decided to look at ways to help with this.

I started out by reading:
https://docs.pact.io/faq/convinceme.html

Then watching:
https://www.youtube.com/watch?v=-6x6XBDf9sQ&feature=youtu.be

Those 2 resources convinced me to give it a shot.

So I set out and created a quick set of Spring Boot apps (the GitHub repo is linked at the end of this post) to test out the concepts and get everything working.

To highlight some important bits from the demo:

Consumer:
Pact is a consumer-driven test framework, so this is where you start: you define a unit test that mocks the HTTP server response, and you assert against that mock.
Once the test passes, it creates a pact JSON file in the pacts/ directory.
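As a rough sketch (assuming the pact-jvm-consumer-junit library; exact class names such as PactProviderRuleMk2 differ between pact-jvm versions, and the provider/consumer names and the /hello endpoint are made up for illustration), such a consumer test looks something like this:

import java.net.URL;
import java.util.Scanner;

import org.junit.Rule;
import org.junit.Test;

import au.com.dius.pact.consumer.Pact;
import au.com.dius.pact.consumer.PactProviderRuleMk2;
import au.com.dius.pact.consumer.PactVerification;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.model.RequestResponsePact;

import static org.junit.Assert.assertTrue;

public class HelloConsumerPactTest {

    // Mock HTTP server that plays the provider's part during the test
    @Rule
    public PactProviderRuleMk2 mockProvider =
            new PactProviderRuleMk2("hello-provider", "localhost", 8080, this);

    // The expected request and the mocked response
    @Pact(provider = "hello-provider", consumer = "hello-consumer")
    public RequestResponsePact createPact(PactDslWithProvider builder) {
        return builder
                .uponReceiving("a request for a greeting")
                .path("/hello")
                .method("GET")
                .willRespondWith()
                .status(200)
                .body("{\"greeting\": \"Hello Pact\"}")
                .toPact();
    }

    // Runs against the mock server above; on success the pact JSON file is written out
    @Test
    @PactVerification("hello-provider")
    public void shouldGetGreeting() throws Exception {
        // Port matches the one given to the rule above
        try (Scanner scanner = new Scanner(new URL("http://localhost:8080/hello").openStream(), "UTF-8")) {
            assertTrue(scanner.useDelimiter("\\A").next().contains("Hello Pact"));
        }
    }
}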

After the "mock" test has run and the pact file has been created, you need to include the Pact Maven plugin, which is then used to publish the contents of the pacts/ folder to the Pact Broker; the plugin is configured in the pom.

Producer:

This uses the JUnit integration from Pact.io to download the pacts from the broker and then run them against a running service.

Since this already uses a @RunWith annotation, I could not use the Spring Boot runner. To get around that, I start the Spring Boot application in a @BeforeClass step, the pacts then get run against that running instance, and the application gets stopped again after the tests. Depending on your use case it would also be an option to do this with @Before so you get a fresh service instance before each pact, but that would slow down the execution tremendously.

The @State annotation allows consumers to define a specific provider state, which the producer can then use to set up the additional data / conditions required for the test to run.
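Putting those pieces together, the producer-side test ends up looking roughly like the sketch below (assuming the pact-jvm-provider-junit integration; annotation and class names can vary between versions, and the application class, ports, broker address and state name are placeholders):

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.springframework.boot.SpringApplication;
import org.springframework.context.ConfigurableApplicationContext;

import au.com.dius.pact.provider.junit.PactRunner;
import au.com.dius.pact.provider.junit.Provider;
import au.com.dius.pact.provider.junit.State;
import au.com.dius.pact.provider.junit.loader.PactBroker;
import au.com.dius.pact.provider.junit.target.HttpTarget;
import au.com.dius.pact.provider.junit.target.Target;
import au.com.dius.pact.provider.junit.target.TestTarget;

@RunWith(PactRunner.class)                   // already taken, so no Spring Boot runner here
@Provider("hello-provider")                  // must match the provider name in the pacts
@PactBroker(host = "localhost", port = "80") // pacts are downloaded from the broker
public class HelloProviderPactTest {

    // The pacts are replayed against this port
    @TestTarget
    public final Target target = new HttpTarget(8080);

    private static ConfigurableApplicationContext application;

    @BeforeClass
    public static void startService() {
        // Start the Spring Boot app once; all pacts run against this instance
        application = SpringApplication.run(HelloProviderApplication.class); // placeholder application class
    }

    @AfterClass
    public static void stopService() {
        application.close();
    }

    @State("a greeting exists")
    public void greetingExists() {
        // Set up whatever data / conditions this provider state requires
    }
}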

Once the pacts have executed against the service, reports are generated in the target folder.




Setting up the Pact Broker

1. Grab the public images from Docker Hub.
docker pull dius/pact_broker
docker pull postgres

2. Then set up the Postgres DB, and at the psql prompt create the broker user and database:
docker run --name pactbroker-db -e POSTGRES_PASSWORD=ThePostgresPassword -e POSTGRES_USER=admin -d postgres
docker run -it --link pactbroker-db:postgres --rm postgres psql -h postgres -U admin
CREATE USER pactbrokeruser WITH PASSWORD 'TheUserPassword';
CREATE DATABASE pactbroker WITH OWNER pactbrokeruser;
GRANT ALL PRIVILEGES ON DATABASE pactbroker TO pactbrokeruser;
3. Once the DB is up, run the actual Broker:
docker run --name pactbroker --link pactbroker-db:postgres -e PACT_BROKER_DATABASE_USERNAME=pactbrokeruser -e PACT_BROKER_DATABASE_PASSWORD=TheUserPassword -e PACT_BROKER_DATABASE_HOST=postgres -e PACT_BROKER_DATABASE_NAME=pactbroker -d -p 80:80 dius/pact_broker


Extra References:

https://docs.pact.io/documentation/
https://docs.pact.io/documentation/sharings_pacts.html
https://github.com/DiUS/pact-jvm
https://github.com/DiUS/pact-jvm/tree/master/pact-jvm-consumer-junit

Get the example project
https://github.com/bdupreez/pactdemo


Saturday, July 19, 2014

TDD, Hamcrest, Shazamcrest

Recently we have started trying to build more of a TDD culture at work; having always believed in thorough testing and decent code coverage, I thought it shouldn't be too hard. However... teaching an old dog new tricks can sometimes require quite a bit of patience. It turns out that breaking coding habits formed over more than a decade of keyboard bashing is harder than it seems.

With generating an enormous amount of test code comes the usual task of code & test maintenance and reuse.
One of the tools / libraries we have included is Hamcrest, which not only improves the readability of assertion failures, but allows you to create and extend custom matchers, which you can then reuse across multiple test scenarios.

I am not going to go into too much detail on Hamcrest here; there are a bunch of great resources / blogs / tutorials out there... here are just a few:
http://www.baeldung.com/hamcrest-collections-arrays
https://weblogs.java.net/blog/johnsmart/archive/2011/12/12/some-useful-new-hamcrest-matchers-collections
http://edgibbs.com/junit-4-with-hamcrest/
http://www.planetgeek.ch/2012/03/07/create-your-own-matcher/
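To give an idea of why this can get out of hand, a hand-rolled Hamcrest matcher for a domain object tends to look something like the sketch below (the Person bean and its getters are hypothetical, purely for illustration):

import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;

public class IsSamePerson extends TypeSafeMatcher<Person> {

    private final Person expected;

    public IsSamePerson(Person expected) {
        this.expected = expected;
    }

    @Override
    protected boolean matchesSafely(Person actual) {
        // One comparison per field... which has to be kept in sync with the bean
        return expected.getName().equals(actual.getName())
                && expected.getSurname().equals(actual.getSurname())
                && expected.getAge() == actual.getAge();
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("a person matching ").appendValue(expected);
    }

    public static Matcher<Person> samePersonAs(Person expected) {
        return new IsSamePerson(expected);
    }
}

Used as assertThat(actual, samePersonAs(expected)); fine for three fields, not so much for a deep domain model.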

While creating a custom type-safe matcher for one of our domain objects, I realised it was insane... really... this.getA == that.getA... mmmm, no.
So I went searching for something that could help, and after a bit I found Shazamcrest (bonus points for the name).
What Shazamcrest does is:
Serialize the objects to compare.
Compare them and, on failure, throw a ComparisonFailure, which the major IDEs let you view in their built-in diff display.

Great... no manual bean compares.
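With Shazamcrest, the whole hand-rolled matcher above collapses into a single sameBeanAs assertion; a minimal sketch, assuming the shazamcrest dependency is on the classpath and reusing the hypothetical Person bean:

import static com.shazam.shazamcrest.MatcherAssert.assertThat;
import static com.shazam.shazamcrest.matcher.Matchers.sameBeanAs;

import org.junit.Test;

public class PersonShazamcrestTest {

    @Test
    public void shouldMatchTheWholeBean() {
        Person expected = new Person("Jane", "Doe", 42);
        Person actual = loadPersonFromSomewhere(); // hypothetical helper

        // Serialises both beans and, on a mismatch, throws a ComparisonFailure
        // that the IDE renders in its diff viewer
        assertThat(actual, sameBeanAs(expected));
    }

    private Person loadPersonFromSomewhere() {
        return new Person("Jane", "Doe", 42);
    }
}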
So I added the Maven dependency and tried it out on our complex domain object...
StackOverflowError... It was a known limitation at the time: the JSON provider Shazamcrest was using, GSON, does not cater for circular-reference serialization.

As both Shazamcrest and GSON are open source, I decided to have a look and see if I could contribute; anything is better than writing a manual bean matcher. After some investigation I found that the guys on the GSON project had already created a fix, GraphAdapterBuilder, it is just not distributed with the actual library.

So after forking the Shazamcrest GitHub project, writing a little bit of code and submitting a pull request:

The guys on the Shazamcrest project very quickly merged my changes in and published a new version to the maven repo (Thanks for that). 
So be sure to use the 0.8 version if you are struggling with circular references.






Tuesday, April 5, 2011

Little Spring Gem: ReflectionTestUtils

I have recently been slacking on content for my blog; between long, stressful hours at work and the wonderful toy that is an iPhone, I have taken a little break from anything "development" related after leaving the office, to maintain my sanity. Now, with the project delivered and things quieting down again, I can re-focus my excess neuron energy back to processing more IT-related information.

One little thing I discovered in my last project, which I am a little embarrassed about, being a Spring nut, is the little gem that is: ReflectionTestUtils.

I mostly try to write unit tests that do not include Spring, for the reason that you should be testing your code and not your Spring config. However, sometimes it's really useful and quite beneficial to have your test code wired up with Spring, be that for integration tests or just to extend your test suite.
One issue was that I always found myself adding "setters" to my component interfaces purely for testing. I always hated doing it, but I would con myself by saying "it's for more testing, and more testing is always a good thing" and move along. I eventually (while procrastinating on work I didn't really want to do) went searching for a cleaner solution.

One search and one minute later, a little red-faced, I kicked myself. I should have guessed straight away that Spring would have thought of this scenario: since 2.5, Spring has had the ReflectionTestUtils class, which simply lets you set your dependencies / mocks via reflection.

So easy: tell it which object, the name of the field, and the mock / value you want to set. Neater interfaces, good times.
Below is an example using EasyMock to create mocks and inject them into my test.
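Something along these lines (the service and DAO types here are hypothetical stand-ins, just to show the pattern):

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;
import org.springframework.test.util.ReflectionTestUtils;

public class AccountServiceTest {

    private AccountServiceImpl accountService; // class under test (hypothetical)
    private AccountDao accountDaoMock;         // dependency normally injected by Spring

    @Before
    public void setUp() {
        accountService = new AccountServiceImpl();
        accountDaoMock = createMock(AccountDao.class);

        // No setter on the interface needed: set the private "accountDao" field via reflection
        ReflectionTestUtils.setField(accountService, "accountDao", accountDaoMock);
    }

    @Test
    public void shouldUseTheInjectedMock() {
        expect(accountDaoMock.countAccounts()).andReturn(3);
        replay(accountDaoMock);

        assertEquals(3, accountService.countAccounts());

        verify(accountDaoMock);
    }
}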

Saturday, August 28, 2010

Cargo, Maven and the benefits of integrated functional testing

A little while back an ex-colleague mentioned that I should have a look at an open source project called Cargo. So what is Cargo? Their mission statement says, "Cargo is a thin wrapper that allows you to manipulate Java EE containers in a standard way.", and that is exactly what it does.

In 2 of my previous posts:
Some useful Maven configurations
and
Your POM is quite nice

I went through some Maven configurations and practices along with the concept of continuous integration; this article and Cargo make a logical "Part 3". In the above two articles there is a lot of emphasis on continuous testing, but with some very large / legacy systems total unit test coverage is something of a luxury and quite often neglected. Depending on the size, type and deployment of the application, there will be circumstances where, in order to quickly gain, promote and gauge system stability, integration or functional testing is a much faster and more beneficial practice than retrofitting unit test coverage.

Tools like Selenium, QTP (QuickTestPro) and a number of others allow you to simply record, play back and validate the application. In my current environment, and I am sure in a lot of other places, these types of tests are run by people outside the development team, namely testing teams, clients or even business analysts. I personally feel that to gain the most benefit out of functional testing and automation tools, these tests need to happen at build time. Having them run outside of the standard development environment adds extra, unneeded iterations and more external dependencies that can delay a software project. Bringing these types of tests into your build process allows you to detect, maintain and repair system issues before they influence the world outside the development environment.

Like everything else surrounding the "continuous integration" philosophy, automation is key. This is where Cargo helps you out. If you already have other tools like Maven and Hudson set up, it is very quick to integrate and benefit from Cargo.

My environment runs on WebLogic, so that is what I will be using in my example, but Cargo supports just about all the major web and application servers, so this should work for anyone.

Maven has the concept of phases, which can be thought of as collections of goals. A full list of Maven's phases is available here: Lifecycle Reference

The three we are interested in with regards to Cargo are:
pre-integration-test
Perform actions required before integration tests are executed. This may involve things such as setting up the required environment.
integration-test
Process and deploy the package, if necessary, into an environment where integration tests can be run.
post-integration-test
Perform actions required after integration tests have been executed. This may include cleaning up the environment.

So any phase later than 'pre-integration-test' will trigger the deployment to your application server.

Cargo can only stop and start local application servers; deployments, however, can also take place remotely, although not all application servers have "deployers" (details are on the Cargo site). The one feature I am very happy with is the server properties: from within your POM you can configure all the JDBC, JMS and JVM settings you require for your application server to function, which is awesome. In my examples I just needed JVM and JDBC settings, but theoretically you should be able to configure most things you may require.

You could integrate the Cargo plugin into your existing POMs, but I prefer to keep it separate. I am not including any Selenium configuration, to keep this post about Cargo; however, here is a very nice Maven Selenium Guide.

So for the local server configuration, this will start up the application server, create the JDBC configuration and deploy the war.
