Testing with Testcontainers

Tests are an integral part of the development process. No feature or functionality can be considered done without a set of tests implementing scenarios that verify that what’s implemented works according to specifications. When we talk in terms of functional or integration tests, more often than not they depend upon some infrastructure, like a database, messaging queue, distributed cache, etc.

Usually, when we want to create integration tests (e.g. to test the persistence layer), it is convenient to load up an in-memory database for that purpose. Although this is easy to set up and get started with, it carries some drawbacks and challenges.

If tests pass against the in-memory database, that does not necessarily mean that the tested code will work correctly against the production database. This can be due to the fact that some vendor-specific features of the production database are simply not supported by the in-memory database. For example, Postgres has the JSONB type, which is not supported by H2 or HSQLDB. To overcome this issue, one route is to implement some workarounds, or simply not cover that part with tests, but this is not ideal, to say the least, because we end up rewriting code to accommodate tests or risk leaving code untested. Another example that comes to mind is a virtual column in Oracle and MySQL databases. Those are just a few examples related to RDBs; if we have a hybrid setup with a NoSQL database, messaging queue, cache… things become more challenging really fast.

One option to overcome these issues is to run our tests against a specific production-like environment. Since this environment is probably already present as part of the deployment and delivery process, it seems like a solid option for testing needs. But there are some drawbacks. The main one is that we cannot easily run a test from a local development environment (e.g. additional configuration needs to be done). The environment might not always be available (it might be used for performance testing, or it simply goes down). Also, when we deploy our application and tests onto the environment and run the tests there, the feedback loop slows down, so errors are detected and fixes are implemented much later.

Testcontainers use case

Luckily, there is an easier way to manage all these challenges by using Testcontainers.

Examples in this blog post are implemented using Java 8, Spring Boot, Spring Boot Test, JUnit, Testcontainers (using the latest Postgres image), Gradle, and Liquibase. Basic knowledge of those technologies is assumed, but in any case, the core principles and ideas presented here should be applicable when using Testcontainers with any other technology stack.

There are two ways to use Testcontainers – managed by Java test code, or managed by build tools like Gradle. The approach described here will focus on usage from Java code. The second approach is outside the scope of this post, so for more info check the official documentation and examples provided by the community.

The first step is to add a dependency to the project. For this example, we’ll use the org.testcontainers:postgresql dependency, which is specialized for supporting the Postgres docker container. There is also the generic testcontainers module, which supports generic containers, docker-compose… For more info, consult the usage documentation. The dependency is added to the Gradle build file:

testImplementation('org.testcontainers:postgresql:1.7.1') // Note: this is the version at the time of writing this blog post, latest artifact version should be used.

Now that we have Testcontainers on our classpath, the easiest way to initialize our Postgres container is to create an instance like so:

final PostgreSQLContainer postgreSQLContainer = new PostgreSQLContainer();

This will initialize the Postgres container with sensible defaults (check the PostgreSQLContainer class for details). After that, we could start the container by invoking postgreSQLContainer.start();.

We could also make the instance public and static and mark it with @ClassRule, which will automatically start the container and then stop it after all tests are executed.
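For example, the rule-managed instance could look like this:

@ClassRule
public static PostgreSQLContainer postgreSQLContainer = new PostgreSQLContainer();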

A container is by default started on a random port, to avoid potential conflicts. We can customize the username, password, and database name during initialization:

private static final PostgreSQLContainer postgreSQLContainer = new PostgreSQLContainer()
        .withUsername("user01")
        .withPassword("pass01")
        .withDatabaseName("testDatabase");

but we cannot reconfigure the port. Also, the port is assigned when the container is started, so there is no way to know which port is going to be used before we actually start the container.

This presents a challenge, assuming that the application has a datasource configured to communicate with the database. In that case, we also need to configure the datasource instance for our integration tests.

One option here is to use a specialized container instance called FixedHostPortGenericContainer:

@ClassRule
public static FixedHostPortGenericContainer postgreSQLContainer =
        new FixedHostPortGenericContainer<>("postgres:latest")
                .withEnv("POSTGRES_USER", "testUser")
                .withEnv("POSTGRES_PASSWORD", "testPassword")
                .withEnv("POSTGRES_DB", "testDb")
                .withFixedExposedPort(60015, 5432); // host port 60015 mapped to the Postgres default port 5432

In the example above, we fixed the host port at 60015, so now, before we start the container, we can configure our datasource instance using the JDBC connection string:

"jdbc:postgresql://"
        + DockerClientFactory.instance().dockerHostIpAddress()
        + ":60015/testDb";

This approach is less than ideal because there are no guarantees that port 60015 will always remain open and that we won’t have some conflicts down the line. Taking that into account, we must leave port assignment dynamic, but somehow initialize the datasource instance, and with it, ideally, the Liquibase instance which can be used to (re)create the database schema. This requires setting up the application context after the container has been started. So, for example, we can have the following test configuration class:

@TestConfiguration
public class TestRdbsConfiguration {

    @Bean
    public PostgreSQLContainer postgreSQLContainer() {
        final PostgreSQLContainer postgreSQLContainer = new PostgreSQLContainer();
        postgreSQLContainer.start();
        return postgreSQLContainer;
    }

    @Bean
    public DataSource dataSource(final PostgreSQLContainer postgreSQLContainer) {
        // Datasource initialization, e.g. HikariCP; any pooling datasource works
        final HikariDataSource ds = new HikariDataSource();
        ds.setJdbcUrl(postgreSQLContainer.getJdbcUrl());
        ds.setUsername(postgreSQLContainer.getUsername());
        ds.setPassword(postgreSQLContainer.getPassword());
        ds.setDriverClassName(postgreSQLContainer.getDriverClassName());
        // Additional parameters configuration omitted
        return ds;
    }

    @Bean
    public Liquibase liquibase(final DataSource dataSource) throws LiquibaseException, SQLException {
        final Database database = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(dataSource.getConnection()));
        return new Liquibase(Paths.get(".", PATH_TO_CHANGELOG_FILE)
                .normalize()
                .toAbsolutePath()
                .toString(), new FileSystemResourceAccessor(), database);
    }
}

And then in our integration test class, we can do for example:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = TestRdbsConfiguration.class,
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class SomeIntRdbsTest {

    @Autowired
    public PostgreSQLContainer postgreSQLContainer;

    @Autowired
    private Liquibase liquibase;

    // Recreate the database schema before each test so no data interdependencies are introduced
    @Before
    public void before() throws LiquibaseException {
        liquibase.dropAll();
        liquibase.update(new Contexts());
    }

    // Test methods …
}

Using this approach is much better than hardcoding the port. The only thing not to forget is to add the test configuration class to the SpringBootTest annotation so it’ll be picked up. Another similar approach came up after scouring the community for Testcontainers and Spring Boot testing topics. The idea is to create an application context initializer class which will, after the container has been started, create and configure the Liquibase and datasource beans. So, for example, we define a class:

public class LbAndDsInitializer implements
        ApplicationContextInitializer<ConfigurableWebApplicationContext> {

    public static final ThreadLocal<PostgreSQLContainer> PG_CONTAINER = ThreadLocal.withInitial(() -> null);

    // We override the initialize method:
    @Override
    public void initialize(ConfigurableWebApplicationContext applicationContext) {
        final PostgreSQLContainer postgreSQLContainer = PG_CONTAINER.get();
        try {
            if (postgreSQLContainer != null) {
                // We initialize the datasource the same way as before
                final DataSource dataSource = initializeDataSource(postgreSQLContainer);
                applicationContext.getBeanFactory().registerSingleton("dataSource", dataSource);
                // We initialize Liquibase the same way as before
                final Liquibase liquibase = initializeLiquibase(dataSource);
                applicationContext.getBeanFactory().registerSingleton("liquibase", liquibase);
            }
        } catch (LiquibaseException | SQLException e) {
            // Do something with the exception
        }
    }
}

As shown in this example, we initialize the datasource and Liquibase beans the same way as in the previous example; the only difference is that we are explicitly putting the beans into the application context. So now, in our test class, we need to start the Postgres testcontainer before context initialization and pass it to our initializer class, so configuration can be completed before tests are executed. Before all that, we need to tell our test class which context initializer to use:

@ContextConfiguration(initializers = LbAndDsInitializer.class)
public class SomeIntRdbsTest

Then we create Postgres testcontainer and define test class rule which will set the container in our initializer:

@ContextConfiguration(initializers = LbAndDsInitializer.class)
public class SomeIntRdbsTest {

    private static final PostgreSQLContainer postgreSQLContainer = new PostgreSQLContainer();

    @ClassRule
    public static TestRule exposePortMappings = RuleChain.outerRule(postgreSQLContainer)
            .around(SomeIntRdbsTest::apply);

    private static Statement apply(Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                LbAndDsInitializer.PG_CONTAINER.set(postgreSQLContainer);
                base.evaluate();
            }
        };
    }
}

Now, when the application context starts, our datasource and Liquibase beans are set up correctly, so we can access the database in the testcontainer.
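For example, a hypothetical test method could verify connectivity through the configured datasource (a sketch, assuming the DataSource bean is also autowired into the test class):

@Test
public void shouldReadFromContainerDatabase() {
    // JdbcTemplate built on the datasource pointing at the container
    final JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    final Integer result = jdbcTemplate.queryForObject("SELECT 1", Integer.class);
    assertEquals(Integer.valueOf(1), result);
}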

Summary

Testcontainers present a very good option to quickly bring up the infrastructure needed for integration testing, giving more control to the developer. There are specialized container options for databases (Postgres, MySQL, Oracle, and Virtuoso), the Selenium driver, and others. If this is not enough, generic containers can be used, which can take an image from both public and private (some extra configuration is needed) docker repositories and can be customized for specific test needs. When using JUnit and Spring Test, make sure to leverage Rules to automatically handle startup, stopping, and cleanup of containers. Use custom test configuration classes or initializers to configure beans or populate property values with testcontainer parameters. The ideas shown here on how to obtain container parameters and initialize the necessary beans and application context present just a few patterns, as I’m sure there are other ways to achieve a similar thing.

Links and references

• Official testcontainers documentation – https://www.testcontainers.org/

A quick guide to Elasticsearch Java clients [Part 3]

In previous blog posts (part 1, part 2), we’ve seen some basic features of the Jest and Spring Data Elasticsearch clients. In this third and final part, we’ll highlight some of the features of the official Elasticsearch High Level REST client and give an overall conclusion for the entire blog post series.

Official Elasticsearch REST API clients

Before we go into the Elasticsearch REST API clients, we will quickly mention native clients.

A native client doesn’t use the REST API; instead, the whole Elasticsearch application is added as a dependency to the client application, and the application becomes a node that joins the cluster. This node can be configured not to be the master node and not to hold any data. Because the application becomes part of the cluster, it is aware of the cluster state and knows which node to go to for data. This makes this setup good in terms of performance. Some disadvantages of this approach show up when we want to horizontally scale our application: we could potentially end up with a bunch of nodes that are joined into a cluster but hold no data. Other than that, TransportClient, which is used to retrieve the native client instance, is to be deprecated starting with version 7 of Elasticsearch and completely removed by version 8, according to the documentation.

Another option is to use one of the REST clients. We’ll showcase here the usage of the Java High Level REST Client.

There are two types of Java REST client: the Low Level REST Client and the High Level REST Client. The low-level client communicates with the Elasticsearch server through HTTP, internally using the Apache HTTP Async Client to issue HTTP requests, but leaves the rest of the work, like (un)marshaling, to the developer. On the other hand, the high-level client is built on top of the low-level client, exposes REST API methods, and takes care of the (un)marshaling work.
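For illustration, a minimal low-level interaction might look like the sketch below (host and endpoint are assumptions; note how the raw response has to be handled manually):

RestClient restClient = RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")).build();
Response response = restClient.performRequest("GET", "/comment/_search");
String responseBody = EntityUtils.toString(response.getEntity()); // (un)marshaling is left to the developer
restClient.close();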

The Java High Level REST Client requires at least Java 1.8 and the Elasticsearch core dependency on the classpath. To use it, the following dependencies need to be added to the project. For Maven projects, in pom.xml:

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>6.2.2</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.2.2</version>
</dependency>

Or, for Gradle projects, in build.gradle:

compile('org.elasticsearch.client:elasticsearch-rest-high-level-client:6.2.2')
compile('org.elasticsearch:elasticsearch:6.2.2')

Configuring the high-level client is pretty straightforward:

RestHighLevelClient esClient = new RestHighLevelClient(RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")));

The high-level client needs to be explicitly closed, so that all underlying resources are released. This can be done by calling esClient.close(); when the client is no longer needed.

To do this, we have a few options, e.g. use try-with-resources and create a new instance of the client every time we need it, which gets automatically closed after the try block is executed:

try (RestHighLevelClient esClient = new RestHighLevelClient(RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")))) {
    // do something with the client
    // client gets closed automatically
} catch (IOException e) {
    // log any errors
}

Or, if the client is a Spring Boot application, we could create the client once as a @Bean, and then one of the ways to close it is to create a Spring @Component with a @PreDestroy-annotated method in which esClient.close() gets called, e.g.:

@PreDestroy
public void closeEsClient() {
    try {
        esClient.close();
    } catch (IOException e) {
        System.out.println("Error while closing elasticsearch client");
    }
}

This should (there are some scenarios where this won’t work, but that topic is out of scope), at least, make sure the client is closed when the application is gracefully shut down or a kill signal is sent. This is because Spring Boot automatically registers singleton methods annotated with @PreDestroy as shutdown hooks.
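The corresponding bean definition might look like this (a minimal sketch, assuming a Spring @Configuration class):

@Bean
public RestHighLevelClient esClient() {
    // single shared client instance for the whole application
    return new RestHighLevelClient(RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")));
}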

Now, with that out of the way, we can create an index:

CreateIndexRequest request = new CreateIndexRequest("comment");
CreateIndexResponse createIndexResponse = esClient.indices().create(request);

This is a synchronous request. There is also an async alternative, createAsync(), to which, besides the request, you need to supply a listener implementation describing how to handle the response or failure scenarios once the response is available.
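A sketch of the async variant, assuming the ActionListener-based signature of this client version:

esClient.indices().createAsync(request, new ActionListener<CreateIndexResponse>() {
    @Override
    public void onResponse(CreateIndexResponse createIndexResponse) {
        // handle the response once it is available
    }

    @Override
    public void onFailure(Exception e) {
        // handle the failure scenario
    }
});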

Using createIndexResponse we can verify whether or not all nodes acknowledged the request: createIndexResponse.isAcknowledged().

Similarly, we can issue requests to open – using OpenIndexRequest, close – using CloseIndexRequest, and delete – using DeleteIndexRequest our index, and verify acknowledgment from the appropriate responses.
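For example, deleting the index might look like this (a sketch following the same pattern):

DeleteIndexRequest deleteRequest = new DeleteIndexRequest("comment");
DeleteIndexResponse deleteResponse = esClient.indices().delete(deleteRequest);
if (deleteResponse.isAcknowledged()) {
    // the index was deleted
}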

If we imagine that we have an instance of a simple POJO like:

public final class Comment {

    private String name;

    private String message;

    // Constructors, getters, setters, and other methods …
}

that we want to index, one of the ways to do that is to include a mapping processor like Jackson in the classpath of our project by adding:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.4</version>
</dependency>

in pom.xml for Maven projects, or in build.gradle for Gradle projects:

compile('com.fasterxml.jackson.core:jackson-databind:2.9.4')

and then index a Comment instance like this:

Comment comment = new Comment("user1", "This is test comment");
ObjectMapper mapper = new ObjectMapper();
String stringToIndex = mapper.writeValueAsString(comment);
IndexRequest request = new IndexRequest("comment", "comment");
request.source(stringToIndex, XContentType.JSON);
IndexResponse response = esClient.index(request);
if (response.status() == RestStatus.CREATED) {
    System.out.println("Index created");
}

Now we can search for all comments and retrieve all hits:

SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
sourceBuilder.timeout(new TimeValue(600, TimeUnit.SECONDS)); // Request timeout
sourceBuilder.from(0);
sourceBuilder.sort(new ScoreSortBuilder().order(SortOrder.DESC)); // Result set ordering
BoolQueryBuilder query = new BoolQueryBuilder();
query.must(new MatchQueryBuilder("user", "user1"));
query.must(new MatchQueryBuilder("message", "This is test comment"));
sourceBuilder.query(query);
SearchRequest searchRequest = new SearchRequest("comment");
searchRequest.source(sourceBuilder);
final SearchResponse searchResponse = esClient.search(searchRequest);
SearchHits hits = searchResponse.getHits();

The BoolQueryBuilder from the example above will produce the following search query (shown with only the important fields for this example):

{
  "bool" : {
    "must" : [
      {
        "match" : {
          "user" : {
            "query" : "user1"
          }
        }
      },
      {
        "match" : {
          "message" : {
            "query" : "This is test comment"
          }
        }
      }
    ]
  }
}
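The hits retrieved above can then be mapped back into Comment instances, e.g. by reusing the same Jackson ObjectMapper (a sketch; exception handling omitted):

for (SearchHit hit : hits) {
    // each hit carries the original JSON document as its source
    Comment found = mapper.readValue(hit.getSourceAsString(), Comment.class);
    // do something with the comment
}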

If there are many result hits, it is a good idea to retrieve them paginated. This is achieved by using Elasticsearch scrolls. So, if we want to search for all comment messages for user1, e.g.:

{
  "bool" : {
    "must" : [
      {
        "match" : {
          "user" : { "query" : "user1" }
        }
      }
    ]
  }
}

first we need to set up the search request similarly to the example above, but with a few additions (note the additional scroll-related settings):

SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
sourceBuilder.timeout(new TimeValue(600, TimeUnit.SECONDS));
sourceBuilder.from(0);
sourceBuilder.size(10); // Size of result hits in scroll
sourceBuilder.sort(new ScoreSortBuilder().order(SortOrder.DESC));
BoolQueryBuilder query = new BoolQueryBuilder();
query.must(new MatchQueryBuilder("user", "user1"));
sourceBuilder.query(query);
SearchRequest searchRequest = new SearchRequest("comment");
searchRequest.source(sourceBuilder);
searchRequest.scroll(TimeValue.timeValueSeconds(600)); // Keep scroll alive

Then we need to issue the initial request to retrieve the scroll id:

SearchResponse searchResponse = esClient.search(searchRequest);
SearchHits searchHits = searchResponse.getHits();
String scrollId = searchResponse.getScrollId();

After that, we can retrieve all scrolls by passing the scroll id to subsequent requests:

while (searchHits != null && searchHits.getHits().length > 0) {
    final SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
    scrollRequest.scroll(TimeValue.timeValueSeconds(600));
    final SearchResponse scrollResponse = esClient.searchScroll(scrollRequest);
    scrollId = scrollResponse.getScrollId(); // the scroll id can change between requests
    searchHits = scrollResponse.getHits();
    // process the search hits …
}

After all search hits are retrieved, we need to make sure to close the scroll:

ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
clearScrollRequest.addScrollId(scrollId);
ClearScrollResponse clearScrollResponse = esClient.clearScroll(clearScrollRequest);
if (!clearScrollResponse.isSucceeded()) {
    System.out.println("Could not close scroll with scroll id: " + scrollId);
}

Besides the query builders already shown, like BoolQueryBuilder and MatchQueryBuilder, there are many more compound and full-text query builders, alongside other types of builders (term level, joining …), which enable the building of sophisticated searches.

The Java High Level REST Client enables Java developers to easily perform both basic and complex operations against the Elasticsearch REST API. Its versions follow the Elasticsearch server versions, making migrations that much easier, and if client application jar size and memory footprint are not that critical, it presents a really strong candidate when considering Java clients.

Summary

In this blog post series, we’ve seen how to install a local instance of the Elasticsearch server, configure it, and run it. We also saw what options are available in regards to Java clients, and some advantages and drawbacks of each of them. For each of the clients presented, small code snippets highlight some of the features, like the configuration of the client and usage of the main features like indexing, searching, getting, and deleting.

Just a small example was presented here to get you quickly started on working with Elasticsearch features. Unit and integration testing are also things to keep in mind when developing Elasticsearch client applications. There are also many more sophisticated configuration options for indexing and fine-tuning searches. To further expand upon those topics, the reader is encouraged to read through the documentation.

Links and references:

A quick guide to ElasticSearch Java Clients – part 1

A quick guide to ElasticSearch Java Clients – part 2

About the author:

Dragan Torbica is a Software Engineer with 7 years of experience mostly in Java and Spring. He believes that software is not something that can be manufactured nor can it be delivered faster by merely adding more lines of code and that writing good software requires skill and careful attention. Currently, he’s working for BrightMarbles.io as Senior Software Developer and Technical Team Lead.

A Quick Guide to Elasticsearch Java clients [Part 2]

In the previous blog post, we’ve seen how to set up a local Elasticsearch server and how to use the Jest Java client. In this part, we’ll see how to use Spring Data Elasticsearch.

Spring data elasticsearch

For Spring users, there is an Elasticsearch library under the Spring Data project (official documentation).

To use Spring Data Elasticsearch, add the following dependency into pom.xml for Maven projects:

<dependencies>
    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-elasticsearch</artifactId>
        <version>3.0.5.RELEASE</version>
    </dependency>
</dependencies>

<repositories>
    <repository>
        <id>spring-libs-release</id>
        <name>Spring Releases</name>
        <url>https://repo.spring.io/libs-release</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>

Or for Gradle projects in your build.gradle:

dependencies {
    compile 'org.springframework.data:spring-data-elasticsearch:3.0.5.RELEASE'
}

repositories {
    maven {
        url 'https://repo.spring.io/libs-release'
    }
}

Spring Data Elasticsearch provides an easy way of configuration using a Java-based configuration class annotated with @Configuration.

ElasticsearchTemplate, similar to JdbcTemplate or RestTemplate, is a helper class that exposes common Elasticsearch functionality like indexing, querying, and scan-and-scroll for pagination, with POJO-to-document mapping included.
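For illustration, indexing a Comment instance through the template might look like this (a minimal sketch; the index and type names are assumptions):

IndexQuery indexQuery = new IndexQueryBuilder()
        .withIndexName("comment")
        .withType("comment")
        .withObject(comment)
        .build();
String documentId = elasticsearchTemplate.index(indexQuery);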

A really strong feature of Spring Data Elasticsearch is the ability to express mapping metadata using annotations like @Document, @Id, and @Field on the indexed entity.
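A minimal annotated entity could look like the sketch below (index and type names, and the field options, are assumptions):

@Document(indexName = "comment", type = "comment")
public class Comment {

    @Id
    private Long id;

    @Field(type = FieldType.Text)
    private String user;

    @Field(type = FieldType.Text)
    private String message;

    // Constructors, getters, setters …
}

Another strong point is interacting with the Elasticsearch server using Spring Repository interface proxies, which is familiar territory for Spring users. These interfaces can be extended with custom methods. When following the naming convention, the search query type can be parsed and determined automatically, e.g. if we imagine: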

interface ElasticSearchCommentsRepository extends Repository<Comment, Long> {

    List<Comment> findByMessageAndUser(String message, String user);
}

the method findByMessageAndUser is going to be automatically parsed and will generate the appropriate search query:

{ "bool" :
    { "must" :
        [
            { "field" : {"message" : "?"} },
            { "field" : {"user" : "?"} }
        ]
    }
}

Find more info on query method parsing in this section of the documentation.

If this is not enough in terms of custom queries, we could annotate methods with @Query and provide a string query, similar to what we’ve seen earlier. Another option is to just implement an interface with a concrete implementation for the methods, which can provide more flexibility and might be more easily maintainable than a bunch of string queries.
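A sketch of such a @Query-annotated method extending the same repository interface (the method name here is hypothetical; the ?0 placeholder binds the first method parameter):

interface ElasticSearchCommentsRepository extends Repository<Comment, Long> {

    @Query("{\"bool\": {\"must\": [{\"match\": {\"message\": \"?0\"}}]}}")
    List<Comment> findByMessageCustom(String message);
}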

There is also the NativeSearchQueryBuilder class, which enables plugging in Elasticsearch query builders, if we have them on the classpath, enabling more control and fine-tuning of the search queries. E.g. if we would like to search for comments of a specific user, we could do something like this:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
        .withQuery(matchQuery("user", "user1")) // statically imported from QueryBuilders
        .build();
List<Comment> comments = elasticSearchTemplate.queryForList(searchQuery, Comment.class);

If the result set is too big, it is a good idea to get the results paginated. This is done by retrieving Elasticsearch scrolls, which are identified by a scroll id. For an example of how to do this, take a look at the following part of the documentation and also at Elasticsearch scrolls.
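A rough sketch of what that could look like (assuming the startScroll/continueScroll/clearScroll operations on ElasticsearchTemplate in this version; names and signatures should be verified against the documentation):

SearchQuery searchQuery = new NativeSearchQueryBuilder()
        .withQuery(matchQuery("user", "user1"))
        .withPageable(PageRequest.of(0, 10)) // page size of 10 per scroll
        .build();
ScrolledPage<Comment> page = (ScrolledPage<Comment>) elasticSearchTemplate.startScroll(60000, searchQuery, Comment.class);
String scrollId = page.getScrollId();
while (page.hasContent()) {
    // process page.getContent() …
    page = (ScrolledPage<Comment>) elasticSearchTemplate.continueScroll(scrollId, 60000, Comment.class);
    scrollId = page.getScrollId();
}
elasticSearchTemplate.clearScroll(scrollId);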

Summary

Spring Data Elasticsearch is a good option for Spring-based projects. It provides ease of configuration, automatic POJO-to-document mapping, and a familiar way of querying Elasticsearch through either repository proxies or the template helper class. Interface proxies enable automatic query generation, which removes the need for query builders and manual building of queries in most cases (but that option is left on the table if need be). One thing to keep in mind, though, when considering migration to a newer version of the Elasticsearch server, is whether the Spring Data release train has kept up. In case of incompatible versions, migrating to newer versions of the Elasticsearch server might be challenging.

In the next blog post, we are going to go over some features of the official Elasticsearch Java client, with the focus on Java High-Level REST Client.


A quick guide to Elasticsearch Java clients [Part 1]

This is the first part of a three-part series, where we are going to explore a simple way to configure and quickly get started working with Elasticsearch, with the main focus on going over the available Java clients and examining their advantages and drawbacks based on available features, potential use cases, and needs.

In this part, we are going to focus on the Jest Java client, but before that, we are briefly going to go over installing and running a local server instance in the next chapter.

Elasticsearch setup

We are not going to deep dive into the inner workings of Elasticsearch itself; for that purpose, there is the official documentation.

For the purposes of this example, we are going to install a local instance of the Elasticsearch server. To do that, we have a few options. We can go to the official downloads page, download the zip/tar package, and run the appropriate script file, depending on the host OS. If we have Docker installed, another option is to pull the docker image and run an Elasticsearch container, e.g.:

docker run -d -p 9200:9200 -p 9300:9300 --name es -e "discovery.type=single-node" -e "http.cors.enabled=true" -e "http.cors.allow-origin=*" docker.elastic.co/elasticsearch/elasticsearch:6.2.2

In either case, if the startup went fine, after issuing a GET request in your favorite browser on the http://localhost:9200 URL, the response should look something like this:

{
  "name" : "b3ZfQOl",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "yPTKgWIhRaiUBrgTmciELQ",
  "version" : {
    "number" : "6.2.2",
    "build_hash" : "10b1edd",
    "build_date" : "2018-02-16T19:01:30.685723Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Notice the three environment variables passed to the container: discovery.type=single-node, http.cors.enabled=true, and http.cors.allow-origin=*. The first setting tells the Elasticsearch server to run as a standalone node, elect itself as master, and not join any clusters. This is sufficient for our example and most testing purposes. The second parameter enables or disables cross-origin resource sharing, i.e. whether a request from another origin can be executed against Elasticsearch. The third should only be used for testing purposes, to allow all origins. Setting up the latter two parameters as shown enables us to use elasticsearch-head, which is a handy little tool to monitor nodes and shards, and query Elasticsearch indices.

JEST

Jest is a lightweight Java client, both in terms of jar size and memory consumption. It implements the Elasticsearch REST API using the Apache HttpComponents library.

To use Jest in your project, add the following dependency in pom.xml for Maven:

<dependency>
    <groupId>io.searchbox</groupId>
    <artifactId>jest</artifactId>
    <version>5.3.3</version>
</dependency>

Or for Gradle projects in your build.gradle:

compile('io.searchbox:jest:5.3.3')

Configuring the Jest client is easy, e.g.:

final JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(new HttpClientConfig
        .Builder("http://127.0.0.1:9200")
        .multiThreaded(true)
        .build());
return factory.getObject();

Besides being lightweight, another really nice feature that Jest provides is the use of Java Beans directly for indexing and searching. For example, if we have a POJO named Comment, a list of comments could be bulk indexed in the following manner:

Bulk.Builder bulkIndexBuilder = new Bulk.Builder();
comments.forEach(c -> bulkIndexBuilder.addAction(
        new Index.Builder(c).index("comment").type("comment").build()));
client.execute(bulkIndexBuilder.build());

And then query it and retrieve all hits:
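A minimal sketch of executing the search (assuming a match-all query against the comment index):

Search search = new Search.Builder("{\"query\": {\"match_all\": {}}}")
        .addIndex("comment")
        .addType("comment")
        .build();
SearchResult result = client.execute(search);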

List<Hit<Comment, Void>> hits = result.getHits(Comment.class);

If we want a more sophisticated search, we really have two options available. The first is to build the string query directly, like so:

String query = "{\"match\" : {\"message\" : \"Test it\"}}";

We can see how this way of building search queries can become hard for more complex examples. This option is also not flexible or extensible, and it is fairly hard to maintain in the long run.

Another option is to include the Elasticsearch dependency on the classpath and use its query builders. This gives cleaner, more flexible, and more maintainable code, but we lose on the lightweight side of things.
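A sketch of that combination (assuming the Elasticsearch dependency on the classpath for SearchSourceBuilder and QueryBuilders):

SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchQuery("message", "Test it"));
// the builder output is passed to Jest as the search source string
Search search = new Search.Builder(searchSourceBuilder.toString())
        .addIndex("comment")
        .build();
SearchResult result = client.execute(search);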

Summary

Overall, Jest offers a fairly good range of functionality in a lightweight package, and in combination with the Elasticsearch dependency on the classpath, it can be considered for a wide range of use cases.

In the next part of the series, we are going to explore the Spring Data Elasticsearch client.

