I regularly face issues when deploying my Liquibase scripts on Oracle, because I don't have an easy way to run them before I deploy. They always run fine on a fresh H2 database, but when I deploy through the pipeline, I often hit basic issues.
I would like to implement some quality checks on my scripts before they are deployed, typically as part of the build. Is there an easy way to do that? For example, something basic like checking that column and table names are not too long for Oracle. Because I don't do it very often, I tend to forget about that limit, and when I deploy on Oracle I get a bad surprise.
Thanks!
I found a way to parse the latest Liquibase files I create and perform some basic checks on them. I guess this can be extended to more advanced checks, but it's already quite useful. Here are five tests (one sanity check plus four naming checks), using AssertJ for the assertions:
import liquibase.change.core.AddColumnChange;
import liquibase.change.core.CreateTableChange;
import liquibase.change.core.RenameColumnChange;
import liquibase.change.core.RenameTableChange;
import liquibase.changelog.ChangeLogParameters;
import liquibase.changelog.DatabaseChangeLog;
import liquibase.exception.ChangeLogParseException;
import liquibase.parser.core.yaml.YamlChangeLogParser;
import liquibase.resource.FileSystemResourceAccessor;
import org.junit.Before;
import org.junit.Test;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.core.io.support.ResourcePatternResolver;

import java.io.IOException;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

import static java.util.stream.Collectors.toList;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.Assert.fail;

public class LiquibaseScriptsTest {

    public static final String LIQUIBASE_FILES_LOCATION = "src/main/resources/db/changelog/changes/*.yml";
    public static final int NB_LAST_FILES_TO_CHECK = 5;

    private static final int ORACLE_TABLE_NAME_MAX_LENGTH = 30;
    private static final int ORACLE_COLUMN_NAME_MAX_LENGTH = 30;

    private static final Comparator<Resource> resourceComparator = Comparator.comparing(Resource::getFilename);
    private static final YamlChangeLogParser parser = new YamlChangeLogParser();

    private List<DatabaseChangeLog> scriptsToCheck;

    @Before
    public void setUp() throws IOException {
        ResourcePatternResolver resourceFinder = new PathMatchingResourcePatternResolver(this.getClass().getClassLoader());
        Resource[] resources = resourceFinder.getResources("file:" + LIQUIBASE_FILES_LOCATION);
        scriptsToCheck = Arrays.stream(resources)
                // look only at the latest files, assuming their names are indexed
                .sorted(resourceComparator.reversed())
                .limit(NB_LAST_FILES_TO_CHECK)
                .map(this::toLiquibaseScript)
                .collect(toList());
    }

    @Test
    public void someScriptsAreChecked() {
        assertThat(scriptsToCheck)
                .as("There's no script to check - are you sure you configured the path correctly?")
                .isNotEmpty();
        assertThat(scriptsToCheck.size()).isLessThanOrEqualTo(NB_LAST_FILES_TO_CHECK);
    }

    @Test
    public void tableNamesShouldBeLessThanOracleMaxSize_whenCreated() {
        Stream<CreateTableChange> createTableChanges = getChangesAsStreamOf(CreateTableChange.class);
        createTableChanges.forEach(tableCreationChange ->
                assertThat(tableCreationChange.getTableName().length())
                        .as("change " + tableCreationChange.getChangeSet().getId() + " - table name is too long - "
                                + tableCreationChange.getTableName() + " is " + tableCreationChange.getTableName().length()
                                + " characters long, while the maximum for Oracle is " + ORACLE_TABLE_NAME_MAX_LENGTH)
                        .isLessThanOrEqualTo(ORACLE_TABLE_NAME_MAX_LENGTH));
    }

    @Test
    public void tableNamesShouldBeLessThanOracleMaxSize_whenModified() {
        Stream<RenameTableChange> renameTableChanges = getChangesAsStreamOf(RenameTableChange.class);
        renameTableChanges.forEach(tableRenameChange ->
                assertThat(tableRenameChange.getNewTableName().length())
                        .as("change " + tableRenameChange.getChangeSet().getId() + " - table name is too long - "
                                + tableRenameChange.getNewTableName() + " is " + tableRenameChange.getNewTableName().length()
                                + " characters long, while the maximum for Oracle is " + ORACLE_TABLE_NAME_MAX_LENGTH)
                        .isLessThanOrEqualTo(ORACLE_TABLE_NAME_MAX_LENGTH));
    }

    @Test
    public void columnNamesShouldBeLessThanOracleMaxSize_whenCreated() {
        Stream<AddColumnChange> addColumnChanges = getChangesAsStreamOf(AddColumnChange.class);
        addColumnChanges.flatMap(columnCreationChanges -> columnCreationChanges.getColumns().stream())
                .forEach(columnCreationChange ->
                        assertThat(columnCreationChange.getName().length())
                                .as("column name is too long - " + columnCreationChange.getName() + " is "
                                        + columnCreationChange.getName().length()
                                        + " characters long, while the maximum for Oracle is " + ORACLE_COLUMN_NAME_MAX_LENGTH)
                                .isLessThanOrEqualTo(ORACLE_COLUMN_NAME_MAX_LENGTH));
    }

    @Test
    public void columnNamesShouldBeLessThanOracleMaxSize_whenModified() {
        Stream<RenameColumnChange> renameColumnChanges = getChangesAsStreamOf(RenameColumnChange.class);
        renameColumnChanges.forEach(columnRenameChange ->
                assertThat(columnRenameChange.getNewColumnName().length())
                        .as("column name is too long - " + columnRenameChange.getNewColumnName() + " is "
                                + columnRenameChange.getNewColumnName().length()
                                + " characters long, while the maximum for Oracle is " + ORACLE_COLUMN_NAME_MAX_LENGTH)
                        .isLessThanOrEqualTo(ORACLE_COLUMN_NAME_MAX_LENGTH));
    }

    // typed Class<E> parameter instead of a raw Class, so the filter and cast are checked by the compiler
    private <E> Stream<E> getChangesAsStreamOf(Class<E> type) {
        return scriptsToCheck.stream()
                .flatMap(script -> script.getChangeSets().stream())
                .flatMap(changeSet -> changeSet.getChanges().stream())
                .filter(type::isInstance)
                .map(type::cast);
    }

    private DatabaseChangeLog toLiquibaseScript(Resource r) {
        try {
            System.out.println("going to apply checks on " + r.getFilename());
            return parser.parse(r.getFile().getCanonicalPath(), new ChangeLogParameters(), new FileSystemResourceAccessor());
        } catch (ChangeLogParseException | IOException e) {
            fail("couldn't parse Liquibase script - " + r.getFilename() + " - " + e.getMessage());
        }
        return null;
    }
}
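The two building blocks above can be exercised without Liquibase on the classpath. Here is a minimal, self-contained sketch (the class and method names `NameCheckSketch`, `fitsOracle`, and `onlyOfType` are mine, not from Liquibase) of the typed `Class<E>`-based stream filter and the 30-character identifier check:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class NameCheckSketch {

    // Oracle's classic identifier limit (Oracle 12.2+ raises it to 128 bytes,
    // but 30 is the safe bound for older versions)
    static final int ORACLE_IDENTIFIER_MAX_LENGTH = 30;

    // true if the identifier fits the 30-character limit
    static boolean fitsOracle(String identifier) {
        return identifier.length() <= ORACLE_IDENTIFIER_MAX_LENGTH;
    }

    // same pattern as getChangesAsStreamOf: filter a mixed list down to one subtype
    static <E> Stream<E> onlyOfType(List<?> items, Class<E> type) {
        return items.stream().filter(type::isInstance).map(type::cast);
    }

    public static void main(String[] args) {
        List<Object> mixed = Arrays.asList("CUSTOMER", 42,
                "A_TABLE_NAME_THAT_IS_MUCH_TOO_LONG_FOR_CLASSIC_ORACLE");
        long tooLong = onlyOfType(mixed, String.class)
                .filter(name -> !fitsOracle(name))
                .count();
        System.out.println(tooLong + " identifier(s) exceed the Oracle limit");
    }
}
```

Because the helper is generic over `Class<E>`, the same check drops in unchanged for any `Change` subtype you want to inspect next.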
You can use the following batch script to roll the database back to its state right before the update, in case any issue happens during the update process.
Of course, you can run this batch with Maven during the test scope.
@ECHO OFF
for /F "usebackq tokens=1,2 delims==" %%i in (`wmic os get LocalDateTime /VALUE 2^>NUL`) do if '.%%i.'=='.LocalDateTime.' set ldt=%%j
set ldt=%ldt:~0,4%-%ldt:~4,2%-%ldt:~6,2%T%ldt:~8,2%:%ldt:~10,2%:%ldt:~12,2%
java -jar path/to/liquibase.jar update
REM "IF ERRORLEVEL 1" matches any exit code of 1 or higher ("IF ERRORLEVEL 0" would always be true)
IF ERRORLEVEL 1 GOTO ProcessError
GOTO :EOF
:ProcessError
ECHO error while executing liquibase scripts
ECHO Rolling back to state at %ldt%
REM rollbackToDate executes the rollback against the database; rollbackToDateSQL would only generate the SQL
java -jar path/to/liquibase.jar rollbackToDate %ldt%
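If the pipeline runs under Maven/JUnit anyway, the timestamp the batch assembles with WMIC string slicing can be produced portably in Java. A small sketch (the class name `RollbackTimestamp` is mine), assuming the `yyyy-MM-dd'T'HH:mm:ss` shape shown in the batch above is what you pass to the rollback command:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class RollbackTimestamp {

    // same shape the batch script builds from WMIC output, e.g. 2015-03-14T09:26:53
    private static final DateTimeFormatter ROLLBACK_FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss");

    static String format(LocalDateTime timestamp) {
        return timestamp.format(ROLLBACK_FORMAT);
    }

    public static void main(String[] args) {
        // capture "now" before triggering the update, then reuse it in the rollback command
        System.out.println(format(LocalDateTime.of(2015, 3, 14, 9, 26, 53)));
        // prints 2015-03-14T09:26:53
    }
}
```

This avoids the locale-dependent quirks of parsing Windows date output and works the same on any build agent.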
I am using liquibase-core-3.3.2.jar and ojdbc5-11.1.0.7.0.jar with the following liquibase.properties file.
#liquibase.properties
driver: oracle.jdbc.OracleDriver
classpath: path/to/ojdbc.jar
url: jdbc:oracle:thin:@****:port:SID
username: OWNER
password: ****
changeLogFile: databaseChangeLog.xml
logLevel: debug
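Since these settings are easy to get wrong, a quick build-time sanity check on the properties file can help too. A sketch (the class name `LiquibasePropertiesCheck` and the list of required keys are my assumptions) that relies on `java.util.Properties` accepting both `=` and the `:` separator used above:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class LiquibasePropertiesCheck {

    // keys I assume the update needs; adjust to your setup
    private static final String[] REQUIRED_KEYS =
            {"driver", "url", "username", "password", "changeLogFile"};

    // returns the first missing/empty key, or null if every required key is present
    static String firstMissingKey(Reader liquibaseProperties) throws IOException {
        Properties props = new Properties();
        props.load(liquibaseProperties); // Properties also parses "key: value" lines
        for (String key : REQUIRED_KEYS) {
            String value = props.getProperty(key);
            if (value == null || value.trim().isEmpty()) {
                return key;
            }
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        // password and changeLogFile are missing on purpose
        String sample = "driver: oracle.jdbc.OracleDriver\n"
                + "url: jdbc:oracle:thin:@host:1521:SID\n"
                + "username: OWNER\n";
        System.out.println("first missing key: " + firstMissingKey(new StringReader(sample)));
    }
}
```

Wired into a unit test, this fails the build with the offending key name instead of a driver error at deploy time.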
You may be ready to start looking at a commercial solution like Datical DB, which uses Liquibase internally but adds things like forecasting, which simulates applying the changes to an abstract in-memory database model. We also have a rules engine that can perform very complex checks of your database and your changelog. Full disclosure: I am an employee of Datical.