I am attempting to build the REDHAWK CF from source on a Fedora 24 machine. I've hit a few barriers and am hoping folks can provide guidance on the following two issues:
1) I cloned the RedhawkSDR/redhawk repository and am attempting to build the CF in redhawk.git/redhawk/core/src. F24 ships with GCC 6, and based on the listed supported platforms (RHEL / CentOS 6-series), I'm guessing this is a bit ahead of what upstream is testing against. At the time of my clone, core was at cfea23b, which is tagged as v2.0.1.
In order to get it to build, I've had to make four changes. The latter two I believe to be required by GCC 6 (which defaults to -std=gnu++14), but I think the first two would be required regardless of the C++ standard in use. That said, these lines were last touched in Feb 2016; hence my concern. My bet is that these would have been caught by now if they were actual errors, which leads me to believe I'm doing something wrong.
My changeset is below:
diff --git a/src/control/framework/nodebooter.cpp b/src/control/framework/nodebooter.cpp
index d79c291..dbd97ad 100644
--- a/src/control/framework/nodebooter.cpp
+++ b/src/control/framework/nodebooter.cpp
@@ -141,7 +141,7 @@ void loadPRFExecParams (const std::string& prfFile, ExecParams& execParams)
prf.load(prfStream);
} catch (const ossie::parser_error& ex) {
std::string parser_error_line = ossie::retrieveParserErrorLineNumber(ex.what());
- LOG_ERROR(nodebooter, "Failed to parse PRF file " << prfStream<< ". " << parser_error_line << "The XML parser returned the following error: " << ex.what());
+ LOG_ERROR(nodebooter, "Failed to parse PRF file " << prfFile<< ". " << parser_error_line << "The XML parser returned the following error: " << ex.what());
exit(EXIT_FAILURE);
}
prfStream.close();
diff --git a/src/control/sdr/dommgr/ApplicationFactory_impl.cpp b/src/control/sdr/dommgr/ApplicationFactory_impl.cpp
index d370519..92699e0 100644
--- a/src/control/sdr/dommgr/ApplicationFactory_impl.cpp
+++ b/src/control/sdr/dommgr/ApplicationFactory_impl.cpp
@@ -194,7 +194,7 @@ void ApplicationFactory_impl::ValidateSPD(CF::FileManager_ptr fileMgr,
const bool require_prf,
const bool require_scd) {
SoftPkg pkg;
- ValidateSPD(fileMgr, pkg, false, false );
+ ValidateSPD(fileMgr, pkg, sfw_profile, require_prf, require_scd);
}
void ApplicationFactory_impl::ValidateSPD(CF::FileManager_ptr fileMgr,
diff --git a/src/control/sdr/dommgr/applicationSupport.cpp b/src/control/sdr/dommgr/applicationSupport.cpp
index 1daa7ce..fbb5ac8 100644
--- a/src/control/sdr/dommgr/applicationSupport.cpp
+++ b/src/control/sdr/dommgr/applicationSupport.cpp
@@ -853,7 +853,7 @@ const bool ComponentInfo::isScaCompliant()
bool ComponentInfo::isAssignedToDevice() const
{
- return assignedDevice;
+ return static_cast<bool>(assignedDevice);
}
bool ComponentInfo::checkStruct(CF::Properties &props)
diff --git a/src/testing/sdr/dev/devices/CppTestDevice/cpp/CppTestDevice.h b/src/testing/sdr/dev/devices/CppTestDevice/cpp/CppTestDevice.h
index 8e1c396..af71c53 100644
--- a/src/testing/sdr/dev/devices/CppTestDevice/cpp/CppTestDevice.h
+++ b/src/testing/sdr/dev/devices/CppTestDevice/cpp/CppTestDevice.h
@@ -28,7 +28,7 @@ class CppTestDevice_i : public CppTestDevice_base
{
ENABLE_LOGGING
public:
- static const float MAX_LOAD = 4.0;
+ static constexpr float MAX_LOAD = 4.0;
CppTestDevice_i(char *devMgr_ior, char *id, char *lbl, char *sftwrPrfl);
CppTestDevice_i(char *devMgr_ior, char *id, char *lbl, char *sftwrPrfl, char *compDev);
2) With this patch, I am able to build the CF. Unfortunately, running make test fails with the following:
cd testing; ./runtests.py
Searching for files in tests/ with prefix test_*.py
Creating the Test Domain
bhilburn22299
R U N N I N G T E S T S
SDRROOT: /home/bhilburn/src/redhawk.git/redhawk/core/src/testing/sdr
Loading module tests/test_00_PythonFramework.py
LOADING
Loading module tests/test_00_PythonUtils.py
LOADING
Loading module tests/test_00_ValidateTestDomain.py
LOADING
Loading module tests/test_01_DeviceManager.py
LOADING
Traceback (most recent call last):
File "./runtests.py", line 231, in <module>
suite = TestCollector(files, testMethodPrefix=options.prefix, prompt=options.prompt)
File "./runtests.py", line 112, in __init__
self.loadTests()
File "./runtests.py", line 129, in loadTests
self.addTest(loader.loadTestsFromTestCase(candidate))
File "./runtests.py", line 104, in loadTestsFromTestCase
return self.suiteClass(map(testCaseClass, testCaseNames))
File "/home/bhilburn/src/redhawk.git/redhawk/core/src/testing/_unitTestHelpers/scatest.py", line 328, in __init__
self._root = self._ns._narrow(CosNaming.NamingContext)
File "/usr/lib/python2.7/site-packages/omniORB/CORBA.py", line 585, in _narrow
return self._obj.narrow(repoId, 1)
omniORB.CORBA.TRANSIENT: CORBA.TRANSIENT(omniORB.TRANSIENT_ConnectFailed, CORBA.COMPLETED_NO)
Makefile:1023: recipe for target 'test' failed
make: *** [test] Error 1
Based on the Installation Instructions, my bet is that omniORB is not configured correctly. I'm having trouble figuring out what exactly I'm missing, though. I have made the change to /etc/omniORB.cfg described in the installation instructions, but when I try to manually invoke cleanomni, I see this:
sh: /etc/init.d/omniNames: No such file or directory
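For reference, the change I made to /etc/omniORB.cfg was adding the InitRef lines from the installation instructions; I'm including mine below in case the addresses or ports matter (yours may differ):

```
InitRef = NameService=corbaname::127.0.0.1
InitRef = EventService=corbaloc::127.0.0.1:11169/omniEvents
```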
My guess is that when the CF is installed through RPMs, some additional configuration steps are performed that aren't described in the installation instructions. Are there any docs that break these out in more detail?
I had to patch the framework in the same spots and took the same approach you did. I also had to configure with the following:
CXXFLAGS='-g -O2 -fpermissive' ./configure --disable-log4cxx
as I did not have log4cxx and a dnf search did not find it.
As for the make test call, you'll need to have omniNames running for the tests to execute, I believe. You should be able to run sudo systemctl start omniNames.service
if you have omniORB-servers installed. I was running in a Docker image, so I started it directly with /usr/bin/omniNames -start -always -logdir /var/log/omniORB/ -errlog /var/log/omniORB/error.log, since I believe I would need to do some extra work to get systemctl playing nicely in Docker. After this, the unit tests should run. I also performed a make install and sourced the profile.d scripts in $OSSIEHOME/etc/profile.d. Of course, I do not have omniEvents installed, so my results are riddled with: ERROR:DomainManager - Service unvailable, Unable to create event channel: IDM_Channel
Regarding the cleanomni script: it is very system dependent, and I believe it only works on CentOS 6. It should stop the omni services, remove the "logs" (which act more like persistence), and then restart them. Depending on how omniNames is started/compiled, the logging directory may differ per system. I usually just clean it by hand or have a bash script to stop/clean/start the services.
After a long wait, my unit tests did finish, with the following results:
Ran 498 tests in 1587.850s FAILED (failures=11, errors=35)
This is likely not a good assessment, though; some of the tests depend on omniEvents, some depend on tools like valgrind being installed, some require BULKIO, etc. So your mileage may vary on the test results, and they may require additional inspection to determine whether the failures are legitimate.