Does anyone know how to run a DDL script to create a Cassandra database via Jenkins? I'm trying to connect to Cassandra through Jenkins in a testing environment in order to upload a test baseline dataset and run integration tests against it.
Answer 1:
I created my own solution to solve a similar issue. Not just for testing, but for applying scripts in order as changes occur to the schema over time. It will work under Jenkins or wherever. There's a class to spin through the list of scripts in order, opening each as an input stream. That class then invokes the execute() method on this class:
package org.makeyourcase.persistence;

import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.List;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.SyntaxError;
import org.apache.commons.io.IOUtils;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class CqlFileRunner {

    private static final Logger LOG = Logger.getLogger(CqlFileRunner.class);

    @Value("${cassandra.node}")
    private String node;

    @Value("${cassandra.keyspace}")
    private String keyspace;

    @Autowired
    private CassandraClusterBuilderMaker cassandraClusterBuilderMaker;

    public void execute(InputStream commandStream) throws IOException {
        // Read the whole script up front. InputStream.available() is not
        // guaranteed to report the full stream length, so use IOUtils.toString.
        String script = IOUtils.toString(commandStream, "UTF-8");
        Cluster cluster = cassandraClusterBuilderMaker.create().addContactPoint(node).build();
        try {
            Session session = cluster.connect(keyspace);
            // Naive split on ';' -- fine for simple scripts, but it will break
            // on semicolons embedded inside string literals.
            List<String> commands = Arrays.asList(script.split(";"));
            for (String command : commands) {
                if (!command.trim().isEmpty()) {
                    command = command.trim() + ";";
                    LOG.info("Execute:\n" + command);
                    try {
                        session.execute(command);
                    } catch (SyntaxError e) {
                        LOG.error("Command failed with " + e.getMessage());
                        throw e;
                    }
                }
            }
        } finally {
            cluster.close(); // shutdown() on pre-2.0 drivers
        }
    }
}
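The split-and-trim logic inside execute() can also be pulled out into a small helper so it can be unit-tested without a live cluster. A minimal sketch (CqlSplitter is my name for it, not part of the original project):

```java
import java.util.ArrayList;
import java.util.List;

public class CqlSplitter {

    // Split a CQL script into individual statements on ';', trimming
    // whitespace and dropping empty fragments. This mirrors the loop in
    // CqlFileRunner.execute(), including its limitation: a semicolon inside
    // a string literal will incorrectly split the statement.
    public static List<String> statements(String script) {
        List<String> result = new ArrayList<String>();
        for (String part : script.split(";")) {
            if (!part.trim().isEmpty()) {
                result.add(part.trim() + ";");
            }
        }
        return result;
    }
}
```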
Given this, you can run CQL scripts to create tables and load data. It's good for small-volume stuff, but probably too slow for anything big. A script might look like:
CREATE TABLE access_tiers (
    level bigint PRIMARY KEY,
    role text
);
ALTER TABLE access_tiers WITH caching = 'all' AND compression = {'sstable_compression' : ''};
INSERT INTO access_tiers (level, role) VALUES (200, 'user_tier2');
INSERT INTO access_tiers (level, role) VALUES (1000, 'user_tier3');
INSERT INTO access_tiers (level, role) VALUES (5000, 'user_tier4');
INSERT INTO access_tiers (level, role) VALUES (10000, 'user_tier5');
INSERT INTO access_tiers (level, role) VALUES (20000, 'user_tier6');
INSERT INTO access_tiers (level, role) VALUES (50000, 'moderator');
Edit:
Since this original post, I've extracted the Java versioning component that I'm using for my project. I also created a small sample project that shows how to integrate it. It's bare-bones. There are different approaches to this problem, so I picked one that was simple to build and does what I need. Here are the two github projects:
https://github.com/DonBranson/cql_schema_versioning
https://github.com/DonBranson/cql_schema_versioning_example
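The core of any script-versioning scheme is deciding which scripts still need to run. This sketch is my illustration of the idea, not the actual cql_schema_versioning API; in a real setup the applied set would be persisted in a Cassandra table rather than held in memory:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class PendingScripts {

    // Given all scripts in their intended application order and the set of
    // script names already applied, return the ones still to run, preserving
    // order. A naming convention like 001_create.cql, 002_data.cql keeps the
    // ordering stable.
    public static List<String> pending(List<String> allInOrder, Set<String> applied) {
        List<String> todo = new ArrayList<String>();
        for (String name : allInOrder) {
            if (!applied.contains(name)) {
                todo.add(name);
            }
        }
        return todo;
    }
}
```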
Answer 2:
What about sticking your DDL statements into an @org.junit.Before-annotated method, and cleaning up in an @org.junit.After one (assuming you are using JUnit)?
IMHO tests should be fully self-contained (if possible); needing to run some manual step beforehand is not a good practice (the schema changes, you're on a new machine, or someone else needs to run the tests for the first time, ...).
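The setup/teardown shape this suggests can be sketched without a live cluster. StatementExecutor below is a hypothetical stand-in for a real Cassandra Session, and the class names are mine; under JUnit, setUp/tearDown would carry @org.junit.Before / @org.junit.After:

```java
public class AccessTiersTestFixture {

    // Stand-in for a real Session, so the fixture can be exercised without
    // Cassandra (in a real test this would wrap session.execute).
    public interface StatementExecutor {
        void execute(String cql);
    }

    private final StatementExecutor session;

    public AccessTiersTestFixture(StatementExecutor session) {
        this.session = session;
    }

    // @Before in JUnit: create the schema and load the baseline rows,
    // so every test starts from a known state.
    public void setUp() {
        session.execute("CREATE TABLE access_tiers (level bigint PRIMARY KEY, role text);");
        session.execute("INSERT INTO access_tiers (level, role) VALUES (200, 'user_tier2');");
    }

    // @After in JUnit: drop the table so no state leaks between tests.
    public void tearDown() {
        session.execute("DROP TABLE access_tiers;");
    }
}
```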