
JIT Compiler with LLVM - Part 4 - CRT dependency

In the (previous article on this topic) we saw how to fix the IR module optimization crashes and memory leaks that were due to an incorrect build setup. Now, in this post I want to discuss the issues I faced when first trying a slightly more advanced JIT compilation: the basic idea was to verify that we could indeed rely on external libraries from our JIT compiled code, which is obviously a key feature. And this proved to be far less easy than I thought it would be…

If you read this section of the page on the ORC v2 design and implementation, you will understand that we should normally be able to find “in process symbols” from our JIT compiled code if we use the DynamicLibrarySearchGenerator::GetForCurrentProcess(…) generator for a given JIT library.

In fact, this is what I initially did by default for the main library of the NervJIT compiler:
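For reference, here is roughly what that setup looks like with the ORC v2 API (a sketch with assumed variable names, not the exact NervJIT code):

```cpp
// Sketch (assumed names): expose the host process symbols to the JIT,
// so that lookups from JIT'ed code can resolve in-process functions.
char prefix = lljit->getDataLayout().getGlobalPrefix();
auto gen = llvm::orc::DynamicLibrarySearchGenerator::GetForCurrentProcess(prefix);
if (!gen) {
    // GetForCurrentProcess returns an llvm::Expected<>, so handle the error:
    llvm::errs() << llvm::toString(gen.takeError()) << "\n";
} else {
    lljit->getMainJITDylib().addGenerator(std::move(*gen));
}
```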

Thus I updated my test project with some new “toy function”:

int my_function(int a, int b)
{
    return a*b;
}

int main(int argc, char *argv[])
{
    // The regular code here to init JIT.

    auto func = (int(*)())jit->lookup("test_function");
    CHECK_RET(func!=nullptr,1,"Invalid test_function pointer.");
    DEBUG_MSG("test_function() result: "<<func());

    // cleaning code here.
}

And I was then using the following content for the test5.cxx script:

int my_function(int, int);

extern "C" int test_function() {
    return my_function(4,5);
}

And of course, this didn't work [hey, would have been too easy, wouldn't it?…] and produced the following error message:

[ERROR]: LLVM error: Failed to materialize symbols: { (main, { test_function }) }

So, for some reason, the JIT compiler couldn't link to the “my_function” function defined in my process… This got me thinking: maybe that's because my test application is just a regular application after all, so when I compile it, my compiler will see that function definition, but since I'm not actually using it anywhere in my code, it will probably find that this is a good opportunity to optimize the code and remove this function completely!

Thus, my first idea was to explicitly tell the compiler “I might need to access this function symbol externally some day, so you should not remove it!”… And on windows/msvc, you usually do that with the __declspec(dllexport) specifier, so I updated the code with:

__declspec(dllexport) int my_function(int a, int b)
{
    return a*b;
}

And this time, this worked ⇒ it seems the function symbol was resolved properly from my test app process itself, and I got the expected result:

[DEBUG]: test_function() result: 20

⇒ But this got me wondering anyway: do I really need to “export” the function? Or would it be enough to ensure that the function is defined somewhere in the executable (assuming this could make sense without “exporting” it, I'm not quite sure about that lol)? So I built the following test code to try to clarify this point:

// __declspec(dllexport)
int my_function(int a, int b)
{
    return a*b;
}

int main(int argc, char *argv[])
{
    // The regular code here to init JIT.

    // Dummy call (with arbitrary arguments) just to force a usage of my_function:
    int res = my_function((int)(intptr_t)getenv("v1"),(int)(intptr_t)getenv("v2"));
    DEBUG_MSG("The dummy value is: "<<res);

    auto func = (int(*)())jit->lookup("test_function");
    CHECK_RET(func!=nullptr,1,"Invalid test_function pointer.");
    DEBUG_MSG("test_function() result: "<<func());

    // cleaning code here.
}

With this construct, the JIT couldn't find “my_function”, and, since I don't think the compiler would be able to optimize my function call away in that case, I'd say it seems that the function must really be exported to be found [arrf, OK, fair enough.]

Anyway, now it's time to move to the more interesting part with dependencies on additional dynamic libraries…

Once again, the idea here was still simple: I was trying to get a very simple/minimal dependency on my nvCore library in my JIT code, something along those lines in a test6.cxx file:

#include <core_common.h>
#include <NervApp.h>

using namespace nv;

extern "C" int showRootPath() {
    auto& app = NervApp::instance();
    String path = app.getRootPath();
    logDEBUG("The nervApp root path is: "<<path);
    return 5;
}

Unfortunately, that part went significantly less smoothly than the previous one.

First I had to update the include search paths, of course; and in my test app I was using:

DEBUG_MSG("Retrieving root path function.");
auto show = (int(*)())jit->lookup("showRootPath");
CHECK_RET(show, 1, "cannot retrieve showRootPath function.");
DEBUG_MSG("Displaying root path here:");
int val = show();
DEBUG_MSG("Result value is: "<<val);

As shown in the code above, I also introduced the support function I just added in my NervJIT class to be able to provide preprocessor macro definitions.

Of course, this couldn't work without providing the nvCore symbols for the link stage. So I also added support for dynamic library dependencies:

{
    char prefix = impl->lljit->getDataLayout().getGlobalPrefix();
    // (assumed completion, with libFile being the path of the dll to load)
    auto lib = llvm::orc::DynamicLibrarySearchGenerator::Load(libFile.c_str(), prefix);
    if(lib) impl->lljit->getMainJITDylib().addGenerator(std::move(*lib));
}

But… of course, this was just not working at all: I was always getting errors when trying to look up the “showRootPath” function, such as:

JIT session error: Symbols not found: [ terminate, ?_Facet_Register@std@@YAXPEAV_Facet_base@1@@Z, _invalid_parameter_noinfo_noreturn, ??_7type_info@@6B@, ??3@YAXPEAX_K@Z ]

At first I really had no idea where this could come from… I searched for the ??3@YAXPEAX_K@Z symbol online and found that it was most probably an operator delete() function, but this didn't help much.

• I tried removing the DynamicLibrarySearchGenerator::GetForCurrentProcess call, thinking maybe there was some kind of conflict with the process symbols,
• I tried adding a lot of windows modules manually (I used DependenciesGui to figure out what the dependencies of my nvCore module were), so I ended up with a long list of manually loaded system dlls.

At some point I noticed that if I added the “uuid.dll” library, then I would get a silent crash instead of a lookup error message, but that was basically all.

… And still, I was always missing at least 3 symbols (?_Facet_Register@std@@YAXPEAV_Facet_base@1@@Z, ??_7type_info@@6B@ and ??3@YAXPEAX_K@Z), and I couldn't find anything helpful on this topic online: quite desperate…

As usual, when anything gets “out of control”, it's a good idea to step back, change your perspective, and try something simpler… So I decided I should try the following JIT code:

#include <iostream>

extern "C" int showRootPath() {
    std::cout << "Hello from function!" << std::endl;
    return 5;
}

⇒ No dependency on nvCore in there, but still, I was getting the same missing symbols with that script. So I went one step further and simply built a minimal test app with this content:

#include <iostream>

int main() {
    std::cout << "Hello from function!" << std::endl;
    return 0;
}

Compiling this code with MSVC on one side, and with clang++ on the other: both produced similar executable files (though not exactly the same size), and both executables only had a dependency on the kernel32.dll module.

But there was something surprising here: this is the command line I used to perform the compilation with clang++:

clang++ -Wall -std=c++17 W:\Projects\NervSeed\tests\test_hello_world\main.cpp -o W:\Projects\NervSeed\dist\bin\msvc64\test_hello_world_clang.exe

⇒ What I found surprising here was that… this was just working out of the box [Yeah… I know lol… most people would rather consider this the “non-surprising” part ]. So I started to wonder: how could the clang++ app just work out of the box when, in my JIT compiler, I need to specify so many command line arguments, include search paths, and other “compiler invocation” settings to get those simple lines to compile? ⇒ clang must be “configuring” everything automatically under the hood!

And here comes the handy -v command line argument! And again, this tiny discovery saved my day [or well… “almost” saved my day]:

clang -x c++ -v -Wall -std=c++17 W:\Projects\NervSeed\tests\test_hello_world\main.cpp -o W:\Projects\NervSeed\dist\bin\msvc64\test_hello_world_clang.exe
clang version 11.0.0 (ssh://git@gitlab.nervtech.org:22002/nerv/NervSeed.git 8512fe463218bc327ae31fb76b8eb2e0fc894c25)
Target: x86_64-pc-windows-msvc
InstalledDir: W:\Projects\NervSeed\deps\msvc64\llvm-20200409\bin
"W:\\Projects\\NervSeed\\deps\\msvc64\\llvm-20200409\\bin\\clang.exe" -cc1 -triple x86_64-pc-windows-msvc19.16.27030 -emit-obj -mrelax-all -mincremental-linker-compatible -disable-free -disable-llvm-verifier -discard-value-names -main-file-name main.cpp -mrelocation-model pic -pic-level 2 -mthread-model posix -mframe-pointer=none -fmath-errno -fno-rounding-math -mconstructor-aliases -munwind-tables -target-cpu x86-64 -dwarf-column-info -v -resource-dir "W:\\Projects\\NervSeed\\deps\\msvc64\\llvm-20200409\\lib\\clang\\11.0.0" -internal-isystem "W:\\Projects\\NervSeed\\deps\\msvc64\\llvm-20200409\\lib\\clang\\11.0.0\\include" -internal-isystem "D:\\Apps\\VisualStudio2017_CE\\VC\\Tools\\MSVC\\14.16.27023\\include" -internal-isystem "D:\\Apps\\VisualStudio2017_CE\\VC\\Tools\\MSVC\\14.16.27023\\atlmfc\\include" -internal-isystem "C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.18362.0\\ucrt" -internal-isystem "C:\\Program Files (x86)\\Windows Kits\\10\\include\\10.0.18362.0\\shared" -internal-isystem "C:\\Program Files (x86)\\Windows Kits\\10\\include\\10.0.18362.0\\um" -internal-isystem "C:\\Program Files (x86)\\Windows Kits\\10\\include\\10.0.18362.0\\winrt" -Wall -std=c++17 -fdeprecated-macro -fdebug-compilation-dir "W:\\Projects\\NervSeed\\deps\\msvc64\\llvm-20200409\\bin" -ferror-limit 19 -fmessage-length=120 -fno-use-cxa-atexit -fms-extensions -fms-compatibility -fms-compatibility-version=19.16.27030 -fdelayed-template-parsing -fcxx-exceptions -fexceptions -fcolor-diagnostics -faddrsig -o "C:\\Users\\ultim\\AppData\\Local\\Temp\\main-03e9a9.o" -x c++ "W:\\Projects\\NervSeed\\tests\\test_hello_world\\main.cpp"
clang -cc1 version 11.0.0 based upon LLVM 11.0.0git default target x86_64-pc-windows-msvc
#include "..." search starts here:
#include <...> search starts here:
W:\Projects\NervSeed\deps\msvc64\llvm-20200409\lib\clang\11.0.0\include
D:\Apps\VisualStudio2017_CE\VC\Tools\MSVC\14.16.27023\include
D:\Apps\VisualStudio2017_CE\VC\Tools\MSVC\14.16.27023\atlmfc\include
C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\ucrt
C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared
C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um
C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt
End of search list.
"D:\\Apps\\VisualStudio2017_CE\\VC\\Tools\\MSVC\\14.16.27023\\bin\\Hostx64\\x64\\link.exe" "-out:W:\\Projects\\NervSeed\\dist\\bin\\msvc64\\test_hello_world_clang.exe" -defaultlib:libcmt "-libpath:D:\\Apps\\VisualStudio2017_CE\\VC\\Tools\\MSVC\\14.16.27023\\lib\\x64" "-libpath:D:\\Apps\\VisualStudio2017_CE\\VC\\Tools\\MSVC\\14.16.27023\\atlmfc\\lib\\x64" "-libpath:C:\\Program Files (x86)\\Windows Kits\\10\\Lib\\10.0.18362.0\\ucrt\\x64" "-libpath:C:\\Program Files (x86)\\Windows Kits\\10\\Lib\\10.0.18362.0\\um\\x64" -nologo "C:\\Users\\ultim\\AppData\\Local\\Temp\\main-03e9a9.o"

So when you ask clang for verbose outputs, well, it does provide verbose outputs. And that's great: from there you can see all the parameters/settings/header paths that are used automatically by default. Of course, I updated the NervJIT/test app accordingly, and from then on I was setting up my default compiler invocation configuration as follows:

compilerInstance = std::make_unique<clang::CompilerInstance>();
auto& compilerInvocation = compilerInstance->getInvocation();

std::stringstream ss;
// ss << "-triple=" << llvm::sys::getDefaultTargetTriple();
ss << "-triple=x86_64-pc-windows-msvc19.16.27030";

DEBUG_MSG("Using triple value: "<<ss.str());

std::vector<const char*> itemcstrs;
std::vector<std::string> itemstrs;
itemstrs.push_back(ss.str());

// cf. https://clang.llvm.org/docs/MSVCCompatibility.html
// cf. https://stackoverflow.com/questions/34531071/clang-cl-on-windows-8-1-compiling-error
itemstrs.push_back("-x");
itemstrs.push_back("c++");
itemstrs.push_back("-mrelax-all");
itemstrs.push_back("-disable-free");
itemstrs.push_back("-mrelocation-model");
itemstrs.push_back("pic");
itemstrs.push_back("-pic-level");
itemstrs.push_back("2");
itemstrs.push_back("-mthread-model");
itemstrs.push_back("posix");
itemstrs.push_back("-mframe-pointer=none");
itemstrs.push_back("-fmath-errno");
itemstrs.push_back("-fno-rounding-math");
itemstrs.push_back("-mconstructor-aliases");
itemstrs.push_back("-munwind-tables");
itemstrs.push_back("-target-cpu");
itemstrs.push_back("x86-64");
itemstrs.push_back("-dwarf-column-info");
//   -disable-llvm-verifier  -main-file-name main.cpp
itemstrs.push_back("-Wall");
itemstrs.push_back("-std=c++17");
itemstrs.push_back("-fdeprecated-macro");
itemstrs.push_back("-ferror-limit");
itemstrs.push_back("19");
itemstrs.push_back("-fmessage-length=120");
itemstrs.push_back("-fno-use-cxa-atexit");
// -fdebug-compilation-dir "W:\\Projects\\NervSeed\\deps\\msvc64\\llvm-20200409\\bin"
itemstrs.push_back("-fms-extensions");
itemstrs.push_back("-fms-compatibility");
itemstrs.push_back("-fms-compatibility-version=19.16.27030");
itemstrs.push_back("-fdelayed-template-parsing");
itemstrs.push_back("-fcxx-exceptions");
itemstrs.push_back("-fexceptions");
itemstrs.push_back("-fcolor-diagnostics");

for (unsigned idx = 0; idx < itemstrs.size(); idx++) {
// note: if itemstrs is modified after this, itemcstrs will be full
// of invalid pointers! Could make copies, but would have to clean up then...
itemcstrs.push_back(itemstrs[idx].c_str());
}

clang::CompilerInvocation::CreateFromArgs(compilerInvocation, llvm::ArrayRef<const char*>(itemcstrs.data(), itemcstrs.size()), *diagnosticsEngine.get());

Yet… unfortunately, even with all those changes, my simple “std::cout” test script was still not working… still the same missing symbols! :-S

Then, I focused my attention on that part:

"D:\\Apps\\VisualStudio2017_CE\\VC\\Tools\\MSVC\\14.16.27023\\bin\\Hostx64\\x64\\link.exe" "-out:W:\\Projects\\NervSeed\\dist\\bin\\msvc64\\test_hello_world_clang.exe" -defaultlib:libcmt "-libpath:D:\\Apps\\VisualStudio2017_CE\\VC\\Tools\\MSVC\\14.16.27023\\lib\\x64" "-libpath:D:\\Apps\\VisualStudio2017_CE\\VC\\Tools\\MSVC\\14.16.27023\\atlmfc\\lib\\x64" "-libpath:C:\\Program Files (x86)\\Windows Kits\\10\\Lib\\10.0.18362.0\\ucrt\\x64" "-libpath:C:\\Program Files (x86)\\Windows Kits\\10\\Lib\\10.0.18362.0\\um\\x64" -nologo "C:\\Users\\ultim\\AppData\\Local\\Temp\\main-03e9a9.o"

⇒ This means that by default, when building a small executable, clang will ask the MSVC link.exe app to link against the libcmt library, which is the static C runtime library. But the point is, we don't have this “linking” stage ourselves in our JIT compilation: we can only retrieve “external” symbols from the process itself or from additional dynamic libraries [at least, as far as I understand]. So one could expect that the JIT would rather link against the dynamic C runtime symbols then… and I thought that was really what I was doing already anyway, since I was providing the DynamicLibrarySearchGenerator::GetForCurrentProcess generator for my main JIT library.

But somehow it seems this was not enough? And then, I found this page on stackoverflow: https://stackoverflow.com/questions/41850296/link-dynamic-c-runtime-with-clang-windows:

The way I've found you can is by adding the options **-Wl,-nodefaultlib:libcmt -D_DLL -lmsvcrt** to override the default. However, this seems quite awkward. Is there a better way of linking the dynamic runtime than this?

⇒ So, do you see where this is leading us? YES, right! All I was really missing from the beginning was this _DLL preprocessor macro! Of course, I wasn't really convinced, but I updated my test code to define that macro when compiling the script.
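In my case that simply meant adding the define to the compiler invocation arguments built earlier (assuming the itemstrs vector from the configuration code above):

```cpp
// Target the dynamic C runtime (ucrt/vcruntime) instead of the static
// libcmt one when compiling the script to an IR module:
itemstrs.push_back("-D_DLL");
```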

And then, suddenly, no more missing symbols and the expected output!:

[DEBUG]: Retrieving root path function.
[DEBUG]: Searching symbol for showRootPath
[DEBUG]: Displaying root path here:
Hello from function!
[DEBUG]: Result value is: 5

Victory!

After so much pain already, I was of course convinced that everything should now be alright, and that I would easily build and run my minimal nvCore usage script… [Oh boy… I really had no idea how wrong I was on this point lol…]

So I went back to my “growing” nvCore test script:

#include <iostream>
#include <core_common.h>
#include <NervApp.h>
#include <sstream>

// #include <vector>

using namespace nv;

// int my_function(int, int);

extern "C" int showRootPath() {
    std::cout << "Hello from function!" << std::endl;
    {
        auto& app = NervApp::instance();
        String path = app.getRootPath();
        std::cout << "Root path is: "<<path<< std::endl;
        std::ostringstream os;
        os << "(stringstream) Root path is: "<<path<<std::endl;
        std::cout << os.str();

        // nv::LogRecord().GetStream(LogManager::DEBUG0, "file", 0, "");
        // .GetStream(LogManager::DEBUG0, __FILE__, __LINE__, "") << "Root path: "<<path;

        logDEBUG("The nervApp root path is: "<<path);
        // logDEBUG("Hello world!");
    }

    int* val = new int();
    *val = 6;
    std::cout<<"My int value is: "<<(*val)<<std::endl;
    delete val;

    NervApp::destroy();
    MemoryManager::destroy();
    return 5;
    // return my_function(4,5);
}

And this just failed again, sniff… still some missing symbols that looked similar to the previous ones [?_Facet_Register@std@@YAXPEAV_Facet_base@1@@Z, _invalid_parameter_noinfo_noreturn, ??_7type_info@@6B@, ??3@YAXPEAX_K@Z]. So I was really not understanding what was happening here. But with some tweaking on that script, I realized that it would actually work and setup/destroy my “NervApp” instance appropriately if and only if I did not try to output anything on the console, either with an std::ostringstream object or my special logDEBUG() macro (which uses an ostringstream object under the hood).

So, back to a simpler test:

#include <iostream>
#include <sstream>
#include <vector>

int my_func()
{
    try {
        // std::ostringstream os;
        // os << "stringstream test!";
        // // std::cout << os.str() << std::endl;
        std::vector<int> vec;
        vec.push_back(1);
        vec.push_back(2);
        vec.push_back(3);
        vec.push_back(0);
        return vec.size();
    }
    catch(...) {
        std::cout <<"An exception occurred!"<<std::endl;
    }
    return 3;
}

extern "C" int test() {
    // std::vector<int> vec;
    // vec.push_back(3);
    std::cout << "Hello from test function!" << std::endl;
    // os << "stringstream test!";
    return my_func();
    // return 7;
}

As one might guess from the content of the test script just above, I also quickly realised that I couldn't use any std::vector either and, I guess, other STL helpers…

⇒ At that point, I had tried so many constructs/dynamic library loadings/preprocessor macro flags that I just lost count, and I wouldn't be able to report all the mistakes I made here anyway. So let's keep it short and head directly to what I think were the important points leading to the appropriate solution [spoiler alert: because yes, indeed, I eventually found a proper solution… or at least that's what I believe for the moment ]:

1. Correct Dynamic C runtime linkage macros

• As previously discussed, the preprocessor definition of _DLL is required to get your C++ code to link against the dynamic C runtime instead of the static C runtime: so we really want to keep that one when compiling code with clang to an IR module [because, so far, I didn't find a way to ask our LLJIT engine to “link” against static libraries when loading a module (but I agree this sounds like something that should be possible… to be investigated one day maybe)].
• At some point, I also thought that the preprocessor definition of _MT was required too (to ask the compiler to link against the microsoft “multithreaded runtime version” (?)), but in the end this doesn't really seem to make any difference: I could remove that macro definition and still get my “C++ scripts” [oh I like the sound of that ] to compile and run properly, so I'm not using it for the moment.

2. Correct Dynamic C runtime architecture

• Another serious point that I'm not sure I fully understand is the version of the runtimes that you use: if you build a simple x64 dll linking to the dynamic C runtime for instance (on a Windows 10 x64 OS I mean), and then check the dependencies of that DLL either with DependenciesGui or Dependency Walker, both will tell you that you have a dependency on (if you display the “full paths”):
• C:\Windows\system32\vcruntime140.dll
• C:\Windows\system32\ucrtbase.dll
• [and many other indirect dependencies, with, I suppose, at some point an optional link to C:\Windows\system32\msvcp140.dll]

Yet… as far as I understand these are actually the 32bit versions of those dlls! I tried to compare with the runtime dlls found in my D:\Apps\VisualStudio2017_CE\VC\Redist\MSVC\14.16.27012\x64\Microsoft.VC141.CRT folder, and the sizes of the files were different.

⇒ This would explain why I couldn't get anything to work with my JIT module as long as I was using those libraries.

So I got a copy of the x64 runtime dll files I needed into a dedicated folder inside my project, and then I started to use these instead.
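Concretely, that means loading those x64 copies into the JIT session before anything else; a sketch, assuming a hypothetical addDynamicLib() helper wrapping DynamicLibrarySearchGenerator::Load, and made-up destination paths:

```cpp
// Hypothetical helper and paths: load the x64 dynamic C runtime DLLs
// (copied into the project) before the other dependencies of the script.
jit->addDynamicLib("W:/Projects/NervSeed/dist/bin/msvc64/vcruntime140.dll");
jit->addDynamicLib("W:/Projects/NervSeed/dist/bin/msvc64/msvcp140.dll");
jit->addDynamicLib("W:/Projects/NervSeed/dist/bin/msvc64/ucrtbase.dll");
```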

All the previous dlls were found in the D:\Apps\VisualStudio2017_CE\VC\Redist\MSVC\14.16.27012\x64\Microsoft.VC141.CRT folder, except for ucrtbase.dll: for that one I searched in my C:\Windows folder, and took the file I found in C:\Windows\WinSxS\amd64_microsoft-windows-ucrt_31bf3856ad364e35_10.0.18362.387_none_016ff738ab3856ff

⇒ That step was definitely needed to get something working in the end, so you have to be very careful about exactly which version of those dlls you are loading in your JIT session.

3. Manually adding some missing symbols

• In a perfect world, I would expect that, if I'm providing the correct dynamic C runtime libraries as described above for my JIT session, then the JIT linker should be able to find all the necessary symbols, and we should be all set already.
• Yet, for a reason I really don't get for the moment, it seems this was not enough (in my case at least): depending on whether I tried to create an std::vector or an std::ostringstream, or to call some other STL method in my script, the lookup method would still complain about some missing symbols such as: [ ??3@YAXPEAX_K@Z, ??2@YAPEAX_K@Z, ??3@YAXPEAX@Z, ??_7type_info@@6B@ ]
AFAIK, the symbols listed just above are all related to the global operator delete and the std::type_info class somehow… maybe there is still something to dig in that direction, but I think I have spent enough time on this point already for the moment.
At first, I was trying to export those symbols directly from my test_nvLLVM application, and subsequently also loading the symbols from the current process into my JIT session, but at the time I was testing this, I was also mixing static/dynamic C runtimes, and even got confused because the missing symbols were very similar (??3@YAXPEAX_K@Z ⇔ ??2@YAPEAX_K@Z ⇔ ??3@YAXPEAX@Z :-S), so I initially didn't get this to work, and later I chose to create a dedicated helper module just to be sure I was not messing everything up again. But thinking about it, I now believe it should also work just fine if, for instance, I export the missing symbols directly from my nvLLVM module and add a dependency on that one in the JIT session: to be tested/investigated later
• So I eventually built a minimal helper shared module that I called “llvm_helper”, which does not contain anything new and will just:
1. Link to the dynamic C runtime,
2. Explicitly re-export the symbols that LLVM couldn't find directly in the linked libraries above.

The source of that helper is thus very simple:

// we just export the symbols we need from here:

#include <vector>
#include <sstream>
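The source above only pulls in the headers; the re-export itself has to be driven somewhere. One way this can be done with MSVC (this is my assumption of the mechanism, not a verified copy of the original helper) is via linker export directives for the missing decorated names:

```cpp
// Assumed mechanism: ask link.exe to re-export the CRT symbols that the
// JIT session could not resolve, using their decorated names.
#pragma comment(linker, "/export:??2@YAPEAX_K@Z")       // operator new(size_t)
#pragma comment(linker, "/export:??3@YAXPEAX_K@Z")      // operator delete(void*, size_t)
#pragma comment(linker, "/export:??_7type_info@@6B@,DATA") // type_info vftable (data export)
```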

And the CMakeLists.txt for that sub project is simply:

SET(TARGET_NAME "llvm_helper")
SET(TARGET_DIR ".")

SET(CMAKE_CXX_FLAGS "/EHsc /MD -D_MT -D_DLL /std:c++17")

FILE(GLOB_RECURSE SOURCE_FILES "*.cpp")

ADD_LIBRARY(${TARGET_NAME} SHARED ${SOURCE_FILES})
# TARGET_LINK_LIBRARIES(${TARGET_NAME})

INSTALL(TARGETS ${TARGET_NAME}
    RUNTIME DESTINATION ${TARGET_DIR}
    LIBRARY DESTINATION ${TARGET_DIR})

And then I was also loading that helper module into my JIT session as an additional dynamic library.

And finally, with all those small changes put together, I could at last compile and run my nvCore integration script correctly! [Feeeeww… This was another hard one!]

Here is what my test_nvLLVM source file currently looks like as a result of those investigations:

#include <llvm_common.h>
#include <iostream>
#include <NervJIT.h>
// #include <NervApp.h>

#undef DEBUG_MSG
#undef ERROR_MSG
#undef THROW_MSG

#define DEBUG_MSG(msg) std::cout << "[DEBUG]: "<<msg<< std::endl;
#define ERROR_MSG(msg) std::cout << "[ERROR]: "<<msg<< std::endl;

#define THROW_MSG(msg)                                                                          \
{                                                                                           \
ERROR_MSG(msg); \
throw std::runtime_error("An exception just occurred.");                                                    \
}

#define CHECK_RET(cond, ret, msg) \
if (!(cond))                  \
{                             \
THROW_MSG(msg);           \
return ret;               \
}

// cf. https://stackoverflow.com/questions/54403377/problems-enabling-rtti-in-llvm-jit-ed-code

// __declspec(dllexport)
int my_function(int a, int b)
{
return a*b;
}

int main(int argc, char *argv[])
{

// auto& app = nv::NervApp::instance();

#if 0
DEBUG_MSG("Running clang compilation...");
runClang({"W:/Projects/NervSeed/temp/test1.cxx",
"W:/Projects/NervSeed/temp/test2.cxx"});
DEBUG_MSG("Done running clang compilation.");
#else
DEBUG_MSG("Initializing LLVM...");
nv::initLLVM(argc, argv);

DEBUG_MSG("Creating NervJIT...");
auto jit = std::make_unique<nv::NervJIT>();

// int res = my_function((int)getenv("v1"),(int)getenv("v2"));
// DEBUG_MSG("The dummy value is: "<<res);
// DEBUG_MSG("_MSC_VER="<<_MSC_VER);

#if 0
// This will not really be working anymore since we are not using our "custom demangling mapping" anymore
{
return (a+b)*3;
}

int nv_sub3(int a, int b)
{
return (a-b)*3;
}
)");

typedef int(*Func)(int, int);

#endif

#if 0
auto func = (int(*)())jit->lookup("test_function");
CHECK_RET(func!=nullptr,1,"Invalid test_function pointer.");
DEBUG_MSG("test_function() result: "<<func());
#endif

#if 0

auto func = (int(*)())jit->lookup("test");
CHECK_RET(func!=nullptr,1,"Invalid test pointer.");
DEBUG_MSG("test() result: "<<func());
#endif

#if 1

// Now we load our nvCore related function:

DEBUG_MSG("Retrieving root path function.");
typedef int(*Func2)();
Func2 show = (Func2)jit->lookup("showRootPath");
CHECK_RET(show, 1, "cannot retrieve showRootPath function.");
DEBUG_MSG("Displaying root path here:");
int val = show();
DEBUG_MSG("Result value is: "<<val);

#endif

DEBUG_MSG("Destroying NervJIT...");
jit.reset();

DEBUG_MSG("Uninitializing LLVM...");
nv::uninitLLVM();
#endif

// nv::NervApp::destroy();
// nv::MemoryManager::destroy();

DEBUG_MSG("Exiting...");
return 0;
}

⇒ Not really the nice and clean reference you'd want to use in production, sure, but hey, it's working at least! So I'll do some cleaning, but first, I'll make sure I keep everything as is on git

⇒ Anyway, as usual, here is a zip package of all the related sub projects (before cleaning !) in case someone wants to have a more careful look at it and/or use it as a template for anything.

Now hopefully, I should be able to finally move to the lua bindings and get to some more interesting “C++ scripting”, so this should be the last article on this low level LLVM JIT compiler implementation experiment… For the moment! Because there is of course still a great deal to consider on this topic (I'm thinking for instance about the LLVM module caching system), so there is a good chance I will get back to it at some point!

Meanwhile, happy hacking everyone!
