Dynamic C++ compilation with LLVM & clang

Over the past few days I've been generating a lot of C++ bindings for Lua (mainly for some toy experiments on mesh generation with OpenCascade, which is really fun/interesting by the way, but that's not the topic here…). And one thing that bothered me in the end was this: I'm OK with using Lua to generate configuration or perform pre-run computations, but I would not want to use it for a continuous update loop in a game engine, for instance (in fact I already tried that a long time ago, and even with LuaJIT you quickly hit some performance limits).

Instead, I want my update loop to be in pure C++. But then, if you need to set this loop up differently from Lua [depending on the experiment at hand], you lose a good share of the scripting advantages, because you must have this update loop code ready to use somewhere in your C++ world anyway. Sure, you could think about a “generic loop” system where you would inject a sequence of “operations”, but it's still the same problem: somewhere, you must have C++-defined classes or functions representing those operations if you want to call them.

So, from that point I started my journey to investigate how to generate C++ code dynamically [from Lua]: because if I could do that, then, my “lua configuration pass” could also be used to setup and build a custom C++ loop function that would be specific to each experiment I want to perform, and still allow me to keep maximum performances :-).

So here we go!

The first thing I found was the Tiny C Compiler project, which looks absolutely awesome! Here is an example of what you can do with the libtcc library, for instance (this is the source code from the official libtcc_test.c file):

/*
 * Simple Test program for libtcc
 *
 * libtcc can be useful to use tcc as a "backend" for a code generator.
 */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

#include "libtcc.h"

/* this function is called by the generated code */
int add(int a, int b)
{
    return a + b;
}

/* this string is referenced by the generated code */
const char hello[] = "Hello World!";

char my_program[] =
"#include <tcclib.h>\n" /* include the "Simple libc header for TCC" */
"extern int add(int a, int b);\n"
"#ifdef _WIN32\n" /* dynamically linked data needs 'dllimport' */
" __attribute__((dllimport))\n"
"#endif\n"
"extern const char hello[];\n"
"int fib(int n)\n"
"{\n"
"    if (n <= 2)\n"
"        return 1;\n"
"    else\n"
"        return fib(n-1) + fib(n-2);\n"
"}\n"
"\n"
"int foo(int n)\n"
"{\n"
"    printf(\"%s\\n\", hello);\n"
"    printf(\"fib(%d) = %d\\n\", n, fib(n));\n"
"    printf(\"add(%d, %d) = %d\\n\", n, 2 * n, add(n, 2 * n));\n"
"    return 0;\n"
"}\n";

int main(int argc, char **argv)
{
    TCCState *s;
    int i;
    int (*func)(int);

    s = tcc_new();
    if (!s) {
        fprintf(stderr, "Could not create tcc state\n");
        exit(1);
    }

    /* if tcclib.h and libtcc1.a are not installed, where can we find them */
    for (i = 1; i < argc; ++i) {
        char *a = argv[i];
        if (a[0] == '-') {
            if (a[1] == 'B')
                tcc_set_lib_path(s, a+2);
            else if (a[1] == 'I')
                tcc_add_include_path(s, a+2);
            else if (a[1] == 'L')
                tcc_add_library_path(s, a+2);
        }
    }

    /* MUST BE CALLED before any compilation */
    tcc_set_output_type(s, TCC_OUTPUT_MEMORY);

    if (tcc_compile_string(s, my_program) == -1)
        return 1;

    /* as a test, we add symbols that the compiled program can use.
       You may also open a dll with tcc_add_dll() and use symbols from that */
    tcc_add_symbol(s, "add", add);
    tcc_add_symbol(s, "hello", hello);

    /* relocate the code */
    if (tcc_relocate(s, TCC_RELOCATE_AUTO) < 0)
        return 1;

    /* get entry symbol */
    func = tcc_get_symbol(s, "foo");
    if (!func)
        return 1;

    /* run the code */
    func(32);

    /* delete the state */
    tcc_delete(s);

    return 0;
}

So you see that you can compile C code, mix it with symbols that are already defined in your current process, retrieve your new C functions, etc… which is all wonderful, but… unfortunately, this was not enough to fit the bill in my case :-( I mean, most of the modules I define/build are in C++, not C, so to be able to access them from this kind of dynamically generated code, I would need to provide a C interface for all the functions/classes I might want to “access dynamically one day”… And that sounds exactly like the initial limitation I mentioned above: I don't want to have to prepare special glue code for all the C++ elements I might want to access! Generating the Lua bindings is enough pain already ;-)!

So, I decided to keep searching for another solution that would be more “C++ friendly”. And that's when I found this article: Compiling C++ Code In Memory With Clang

At first I didn't really want to go that way because clang seemed like a giant monster to me, so I was assuming it would be a lot of pain to get this option up and running, but eventually I realized there are not that many alternatives on this topic anyway, so I decided I should give it a try and see how it goes.

For the compilation stage I used the following pages as reference:

I'm on Windows 10 and using Visual Studio 2017 as my base compiler, so the following instructions might not really work for you if you are on a different platform.

As mentioned on the reference page just above, you should first ensure that your git core.autocrlf config entry is set to false. Note that you can get the value of all your git config entries with:

git config --list

Then the first real step is obviously to retrieve the sources, but that's really simple:

git clone

Then I setup a small batch script to perform the compilation as I want encapsulating all the details:

	REM cf.
	REM and cf.

	set flavor=%~1
	echo Building %dep_llvm% on %flavor%

	set bdir=%NV_DEPS_DIR%\build\%dep_llvm%
	mkdir "%bdir%\build"

	cd /d "%bdir%\build"
	echo LLVM/Clang build dir is: %cd%

	set idir=%NV_DEPS_DIR%\%flavor%\%dep_llvm%

	REM Python 2.7 or higher is required:
	set PATH=%NV_TOOLS_DIR%\%tool_python2%\bin;%PATH%

	REM The NMake generator doesn't support the -A x64 / -Thost=x64 arguments:
	REM %CMAKE% -G "NMake Makefiles" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=%idir% -DLLVM_ENABLE_PROJECTS=clang -A x64 -Thost=x64 ..\llvm
	%CMAKE% -G "NMake Makefiles" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=%idir% -DLLVM_ENABLE_PROJECTS=clang ..\llvm

	REM %JOM% /K /S /j 8 /NOLOGO
	REM %JOM% install
	nmake
	nmake install
	echo Done building LLVM/Clang.

In the batch script just above, the “flavor” value I'm using for now is the string “msvc64”. All I do is basically create a dedicated “build” folder, then call cmake to generate the compilation files (I usually avoid compiling from IDEs, so I'm using the NMake Makefiles generator here).

Then I call nmake and nmake install to complete the job.

Python 2.7 is required for the cmake configuration step to succeed here, so I add it to the PATH before calling cmake.
The “NMake Makefiles” generator doesn't support the “-A x64” or “-Thost=x64” command-line arguments, so I removed them… but that didn't seem to be a problem for me (I'm on a Windows x64 machine and I'm only targeting the x64 architecture anyway).
First I tried the compilation using JOM instead of nmake, but that didn't seem to work out of the box for me :-( JOM was stuck, not compiling anything… so I switched to nmake without thinking too much about it. That one works fine but boy… it's so slllllooowwwwww… :-( [compilation took about 8h for me lol] ⇒ One day, if I get a chance, I should give JOM another try.

And… surprisingly, after waiting a verrrryyy lonnnnnnnnnnng time, the compilation completed successfully! That part was clearly easier than I was expecting :-) !

Once I had the LLVM/Clang binaries/libraries compiled and installed in an appropriate folder, I started integrating them into my own project, trying to build a dedicated shared library that would encapsulate the dynamic C++ code generation. I called the module nvLLVM and I started with the article from **Matthieu Brucher** mentioned above as a base.

Here are the 2 main header files I created for that module:

  • First the llvm_common.h file which serves as an export interface for me to find my test function later:
    #ifndef LLVM_COMMON_
    #define LLVM_COMMON_

    #if defined(_MSC_VER) || defined(__CYGWIN__) || defined(__MINGW32__) || defined(__BCPLUSPLUS__) || defined(__MWERKS__)
      #if defined(NV_LIB_STATIC)
        #define NVLLVM_EXPORT
      #elif defined(NVLLVM_LIB)
        #define NVLLVM_EXPORT __declspec(dllexport)
      #else
        #define NVLLVM_EXPORT __declspec(dllimport)
      #endif
    #else
      #define NVLLVM_EXPORT
    #endif

    #if defined(_WIN32) && !defined(_WIN32_WINNT)
      #define _WIN32_WINNT 0x0602
    #endif

    #include <string>

    NVLLVM_EXPORT void runClang(const std::string& file);

    #endif // LLVM_COMMON_
  • And then the llvm_precomp.h header which contains most of the headers required from LLVM/clang to build our test function:
    #ifndef LLVM_PRECOMP_
    #define LLVM_PRECOMP_
    #include <llvm_common.h>
    // cf.
    #pragma warning( push )
    #pragma warning( disable : 4244 ) // 'initializing': conversion from '_Ty' to '_Ty1', possible loss of data
    #pragma warning( disable : 4624 ) // destructor was implicitly defined as deleted
    #pragma warning( disable : 4141 ) // 'inline': used more than once
    #pragma warning( disable : 4291 ) // no matching operator delete found; memory will not be freed if initialization throws an exception
    #include <sstream>
    #include <llvm/InitializePasses.h>
    #include <llvm/ExecutionEngine/ExecutionEngine.h>
    #include <llvm/ExecutionEngine/MCJIT.h>
    #include <llvm/ExecutionEngine/SectionMemoryManager.h>
    #include <llvm/IR/DataLayout.h>
    #include <llvm/IR/LLVMContext.h>
    #include <llvm/IR/PassManager.h>
    #include <llvm/Passes/PassBuilder.h>
    #include <llvm/Support/MemoryBuffer.h>
    #include <llvm/Support/TargetSelect.h>
    #include <llvm/Support/TargetRegistry.h>
    #include <llvm/Support/Host.h>
    #include <llvm/Support/raw_ostream.h>
    #include "llvm/ExecutionEngine/JITSymbol.h"
    #include "llvm/ExecutionEngine/Orc/CompileUtils.h"
    #include "llvm/ExecutionEngine/Orc/Core.h"
    #include "llvm/ExecutionEngine/Orc/ExecutionUtils.h"
    #include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"
    #include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
    #include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h"
    #include <clang/Basic/DiagnosticOptions.h>
    #include <clang/Basic/Diagnostic.h>
    #include <clang/Basic/FileManager.h>
    #include <clang/Basic/FileSystemOptions.h>
    #include <clang/Basic/LangOptions.h>
    #include <MemoryBufferCache.h>
    // #include <clang/Basic/MemoryBufferCache.h>
    #include <clang/Basic/SourceManager.h>
    #include <clang/Basic/TargetInfo.h>
    #include <clang/CodeGen/CodeGenAction.h>
    #include <clang/Frontend/CompilerInstance.h>
    #include <clang/Frontend/CompilerInvocation.h>
    #include <clang/Frontend/TextDiagnosticPrinter.h>
    #include <clang/Lex/HeaderSearch.h>
    #include <clang/Lex/HeaderSearchOptions.h>
    #include <clang/Lex/Preprocessor.h>
    #include <clang/Lex/PreprocessorOptions.h>
    #include <clang/Parse/ParseAST.h>
    #include <clang/Sema/Sema.h>
    #include <clang/AST/ASTContext.h>
    #include <clang/AST/ASTConsumer.h>
    #pragma warning( pop )

    #endif // LLVM_PRECOMP_

I made some changes at this level already compared to the version provided by Matthieu Brucher:

  • I disabled a bunch of warnings from the Visual Studio 2017 compiler that were polluting my compilation outputs (nothing too serious I think… or at least nothing I could do anything about anyway: I'm not going to change the LLVM header files :-)!).
  • I had to replace the include clang/Basic/MemoryBufferCache.h with a local version of that file: the LLVM version I'm using from git is 11.0.0git [as reported by the LLVM cmake config at least, see below], and in that version the file clang/Basic/MemoryBufferCache.h doesn't exist anymore. Fortunately I was able to find the corresponding header and implementation files online, and I added them to this module:
    //===- MemoryBufferCache.h - Cache for loaded memory buffers ----*- C++ -*-===//
    // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
    // See for license information.
    // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
    // cf.

    #ifndef LLVM_CLANG_BASIC_MEMORYBUFFERCACHE_H
    #define LLVM_CLANG_BASIC_MEMORYBUFFERCACHE_H

    #include "llvm/ADT/IntrusiveRefCntPtr.h"
    #include "llvm/ADT/StringMap.h"
    #include <memory>

    namespace llvm {
    class MemoryBuffer;
    } // end namespace llvm

    namespace clang {

    /// Manage memory buffers across multiple users.
    ///
    /// Ensures that multiple users have a consistent view of each buffer.  This is
    /// used by \a CompilerInstance when building PCMs to ensure that each \a
    /// ModuleManager sees the same files.
    ///
    /// \a finalizeCurrentBuffers() should be called before creating a new user.
    /// This locks in the current buffers, ensuring that no buffer that has already
    /// been accessed can be purged, preventing use-after-frees.
    class MemoryBufferCache : public llvm::RefCountedBase<MemoryBufferCache> {
      struct BufferEntry {
        std::unique_ptr<llvm::MemoryBuffer> Buffer;

        /// Track the timeline of when this was added to the cache.
        unsigned Index;
      };

      /// Cache of buffers.
      llvm::StringMap<BufferEntry> Buffers;

      /// Monotonically increasing index.
      unsigned NextIndex = 0;

      /// Bumped to prevent "older" buffers from being removed.
      unsigned FirstRemovableIndex = 0;

    public:
      /// Store the Buffer under the Filename.
      ///
      /// \pre There is not already a buffer in the cache with this filename.
      /// \return a reference to the buffer as a convenience.
      llvm::MemoryBuffer &addBuffer(llvm::StringRef Filename,
                                    std::unique_ptr<llvm::MemoryBuffer> Buffer);

      /// Try to remove a buffer from the cache.
      ///
      /// \return false on success, iff \c !isBufferFinal().
      bool tryToRemoveBuffer(llvm::StringRef Filename);

      /// Get a pointer to the buffer if it exists; else nullptr.
      llvm::MemoryBuffer *lookupBuffer(llvm::StringRef Filename);

      /// Check whether the buffer is final.
      ///
      /// \return true iff \a finalizeCurrentBuffers() has been called since the
      /// buffer was added.  This prevents buffers from being removed.
      bool isBufferFinal(llvm::StringRef Filename);

      /// Finalize the current buffers in the cache.
      ///
      /// Should be called when creating a new user to ensure previous uses aren't
      /// invalidated.
      void finalizeCurrentBuffers();
    };

    } // end namespace clang

    #endif // LLVM_CLANG_BASIC_MEMORYBUFFERCACHE_H

//===- MemoryBufferCache.cpp - Cache for loaded memory buffers ------------===//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
// cf.

#include <llvm_precomp.h>
#include <llvm/Support/MemoryBuffer.h>

using namespace clang;

llvm::MemoryBuffer &
MemoryBufferCache::addBuffer(llvm::StringRef Filename,
                             std::unique_ptr<llvm::MemoryBuffer> Buffer) {
  auto Insertion =
      Buffers.insert({Filename, BufferEntry{std::move(Buffer), NextIndex++}});
  assert(Insertion.second && "Already has a buffer");
  return *Insertion.first->second.Buffer;
}

llvm::MemoryBuffer *MemoryBufferCache::lookupBuffer(llvm::StringRef Filename) {
  auto I = Buffers.find(Filename);
  if (I == Buffers.end())
    return nullptr;
  return I->second.Buffer.get();
}

bool MemoryBufferCache::isBufferFinal(llvm::StringRef Filename) {
  auto I = Buffers.find(Filename);
  if (I == Buffers.end())
    return false;
  return I->second.Index < FirstRemovableIndex;
}

bool MemoryBufferCache::tryToRemoveBuffer(llvm::StringRef Filename) {
  auto I = Buffers.find(Filename);
  assert(I != Buffers.end() && "No buffer to remove...");

  // Finalized buffers cannot be removed:
  if (I->second.Index < FirstRemovableIndex)
    return true;

  Buffers.erase(I);
  return false;
}

void MemoryBufferCache::finalizeCurrentBuffers() { FirstRemovableIndex = NextIndex; }

Then comes the main implementation file where I try to reproduce the dynamic C++ compilation process:

#include <llvm_precomp.h>

#include <iostream>
#include <iterator>
#include <vector>

bool LLVMinit = false;

#define ERROR_MSG(msg) std::cout << "[ERROR]: "<<msg<< std::endl;
#define DEBUG_MSG(msg) std::cout << "[DEBUG]: "<<msg<< std::endl;

void InitializeLLVM()
{
    if (LLVMinit)
        return;

    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();
    llvm::InitializeNativeTargetAsmParser();

    // We have not initialized any pass managers for any device yet.
    // Run the global LLVM pass initialization functions.
    auto& Registry = *llvm::PassRegistry::getPassRegistry();
    llvm::initializeCore(Registry);
    llvm::initializeScalarOpts(Registry);
    llvm::initializeVectorization(Registry);
    llvm::initializeIPO(Registry);
    llvm::initializeAnalysis(Registry);
    llvm::initializeTransformUtils(Registry);
    llvm::initializeInstCombine(Registry);
    llvm::initializeTarget(Registry);

    LLVMinit = true;
}

void runClang(const std::string& file)
{
    InitializeLLVM();

    clang::IntrusiveRefCntPtr<clang::DiagnosticOptions> diagnosticOptions = new clang::DiagnosticOptions;

    std::unique_ptr<clang::TextDiagnosticPrinter> textDiagnosticPrinter = std::make_unique<clang::TextDiagnosticPrinter>(llvm::outs(), diagnosticOptions.get());
    clang::IntrusiveRefCntPtr<clang::DiagnosticIDs> diagIDs;

    clang::IntrusiveRefCntPtr<clang::DiagnosticsEngine> diagnosticsEngine = new clang::DiagnosticsEngine(diagIDs, diagnosticOptions, textDiagnosticPrinter.get());

    clang::CompilerInstance compilerInstance;
    auto& compilerInvocation = compilerInstance.getInvocation();

    // Build the "command line" for the compiler invocation:
    std::stringstream ss;
    ss << "-triple=" << llvm::sys::getDefaultTargetTriple();

    std::istream_iterator<std::string> begin(ss);
    std::istream_iterator<std::string> end;
    std::istream_iterator<std::string> i = begin;
    std::vector<const char*> itemcstrs;
    std::vector<std::string> itemstrs;
    while (i != end) {
        itemstrs.push_back(*i);
        ++i;
    }

    for (unsigned idx = 0; idx < itemstrs.size(); idx++) {
      // note: if itemstrs is modified after this, itemcstrs will be full
      // of invalid pointers! Could make copies, but would have to clean up then...
      itemcstrs.push_back(itemstrs[idx].c_str());
    }

    clang::CompilerInvocation::CreateFromArgs(compilerInvocation, llvm::ArrayRef<const char*>(itemcstrs.data(), itemcstrs.size()), *diagnosticsEngine.get());

    auto* languageOptions = compilerInvocation.getLangOpts();
    auto& preprocessorOptions = compilerInvocation.getPreprocessorOpts();
    auto& targetOptions = compilerInvocation.getTargetOpts();
    auto& frontEndOptions = compilerInvocation.getFrontendOpts();
    frontEndOptions.ShowStats = true;
    auto& headerSearchOptions = compilerInvocation.getHeaderSearchOpts();
    headerSearchOptions.Verbose = true;
    auto& codeGenOptions = compilerInvocation.getCodeGenOpts();

    // llvm::StringRef filename = "W:/Projects/NervSeed/temp/test1.cxx";
    llvm::StringRef filename = file;

    frontEndOptions.Inputs.push_back(clang::FrontendInputFile(filename, clang::InputKind(clang::Language::CXX)));

    targetOptions.Triple = llvm::sys::getDefaultTargetTriple();
    compilerInstance.createDiagnostics(textDiagnosticPrinter.get(), false);

    // Compile the input file down to an LLVM IR module:
    llvm::LLVMContext context;
    std::unique_ptr<clang::CodeGenAction> action = std::make_unique<clang::EmitLLVMOnlyAction>(&context);
    if (!compilerInstance.ExecuteAction(*action)) {
        ERROR_MSG("Cannot execute action with compiler instance.");
        return;
    }

    std::unique_ptr<llvm::Module> module = action->takeModule();
    if (!module) {
        ERROR_MSG("Cannot retrieve IR module.");
        return;
    }

    // Optimize the IR module:
    llvm::PassBuilder passBuilder;
    llvm::LoopAnalysisManager loopAnalysisManager(codeGenOptions.DebugPassManager);
    llvm::FunctionAnalysisManager functionAnalysisManager(codeGenOptions.DebugPassManager);
    llvm::CGSCCAnalysisManager cGSCCAnalysisManager(codeGenOptions.DebugPassManager);
    llvm::ModuleAnalysisManager moduleAnalysisManager(codeGenOptions.DebugPassManager);

    passBuilder.registerModuleAnalyses(moduleAnalysisManager);
    passBuilder.registerCGSCCAnalyses(cGSCCAnalysisManager);
    passBuilder.registerFunctionAnalyses(functionAnalysisManager);
    passBuilder.registerLoopAnalyses(loopAnalysisManager);
    passBuilder.crossRegisterProxies(loopAnalysisManager, functionAnalysisManager, cGSCCAnalysisManager, moduleAnalysisManager);

    llvm::ModulePassManager modulePassManager = passBuilder.buildPerModuleDefaultPipeline(llvm::PassBuilder::OptimizationLevel::O3);
    modulePassManager.run(*module, moduleAnalysisManager);

    // Build the execution engine (MCJIT) from the optimized module:
    llvm::EngineBuilder builder(std::move(module));
    std::string createErrorMsg;
    builder.setErrorStr(&createErrorMsg);
    builder.setMCJITMemoryManager(std::make_unique<llvm::SectionMemoryManager>());
    // builder.setEngineKind(llvm::EngineKind::Interpreter);
    builder.setEngineKind(llvm::EngineKind::JIT);

    std::string triple = llvm::sys::getDefaultTargetTriple();
    DEBUG_MSG("Using target triple: "<<triple);

    auto executionEngine = builder.create();
    if (!executionEngine) {
        ERROR_MSG("Cannot create execution engine: '"<<createErrorMsg<<"'");
        return;
    }

    DEBUG_MSG("Retrieving nv_add/nv_sub functions...");
    typedef int(*AddFunc)(int,int);
    typedef int(*SubFunc)(int,int);

    AddFunc add = reinterpret_cast<AddFunc>(executionEngine->getFunctionAddress("nv_add"));
    if(!add) {
        ERROR_MSG("Cannot retrieve Add function.");
    }
    else {
        int res = add(40,2);
        ERROR_MSG("The meaning of life is: "<<res<<"!");
    }

    SubFunc sub = reinterpret_cast<SubFunc>(executionEngine->getFunctionAddress("nv_sub"));
    if(!sub) {
        ERROR_MSG("Cannot retrieve Sub function.");
    }
    else {
        int res = sub(50,8);
        ERROR_MSG("The meaning of life is really: "<<res<<"!");
    }

    DEBUG_MSG("leaving runClang() function.");
    // return reinterpret_cast<Function>(executionEngine->getFunctionAddress(function));
}

I didn't change much at the beginning of that file, but I had to replace a few unique_ptrs with the LLVM-provided IntrusiveRefCntPtr containers (this was required because the initial code was not compiling).

And I added some additional debug outputs, trying to call the functions that were defined in the C++ source provided as argument (i.e. in this simple test I'm simply expecting to find the nv_add and nv_sub functions).

One thing that was missing from the original article from Matthieu Brucher was the compilation configuration files around that kind of shared module. For my part I use cmake in my project, and here is what I came up with so far:

At the root of this nvLLVM module I have the following CMakeLists.txt file:




# We should try to find the LLVM package:
# message(STATUS "Using LLVMConfig.cmake in: ${LLVM_DIR}")

# message(STATUS "LLVM includes: ${LLVM_INCLUDE_DIRS}")

# message(STATUS "Using LLVM definitions: ${LLVM_DEFINITIONS}")

# This is needed to ensure we use the same C runtime as the LLVM components:

# Note: used llvm-config.exe --libs to retrieve the list of libraries below:
SET(LLVM_LIBS LLVMXRay LLVMWindowsManifest LLVMTableGen LLVMSymbolize LLVMDebugInfoPDB LLVMOrcJIT LLVMOrcError LLVMJITLink LLVMObjectYAML LLVMMCA LLVMLTO LLVMPasses LLVMCoroutines LLVMObjCARCOpts LLVMLineEditor LLVMLibDriver LLVMInterpreter LLVMFuzzMutate LLVMMCJIT LLVMExecutionEngine LLVMRuntimeDyld LLVMDWARFLinker LLVMDlltoolDriver LLVMOption LLVMDebugInfoGSYM LLVMCoverage LLVMXCoreDisassembler LLVMXCoreCodeGen LLVMXCoreDesc LLVMXCoreInfo LLVMX86Disassembler LLVMX86AsmParser LLVMX86CodeGen LLVMX86Desc LLVMX86Utils LLVMX86Info LLVMWebAssemblyDisassembler LLVMWebAssemblyCodeGen LLVMWebAssemblyDesc LLVMWebAssemblyAsmParser LLVMWebAssemblyInfo LLVMSystemZDisassembler LLVMSystemZCodeGen LLVMSystemZAsmParser LLVMSystemZDesc LLVMSystemZInfo LLVMSparcDisassembler LLVMSparcCodeGen LLVMSparcAsmParser LLVMSparcDesc LLVMSparcInfo LLVMRISCVDisassembler LLVMRISCVCodeGen LLVMRISCVAsmParser LLVMRISCVDesc LLVMRISCVUtils LLVMRISCVInfo LLVMPowerPCDisassembler LLVMPowerPCCodeGen LLVMPowerPCAsmParser LLVMPowerPCDesc LLVMPowerPCInfo LLVMNVPTXCodeGen LLVMNVPTXDesc LLVMNVPTXInfo LLVMMSP430Disassembler LLVMMSP430CodeGen LLVMMSP430AsmParser LLVMMSP430Desc LLVMMSP430Info LLVMMipsDisassembler LLVMMipsCodeGen LLVMMipsAsmParser LLVMMipsDesc LLVMMipsInfo LLVMLanaiDisassembler LLVMLanaiCodeGen LLVMLanaiAsmParser LLVMLanaiDesc LLVMLanaiInfo LLVMHexagonDisassembler LLVMHexagonCodeGen LLVMHexagonAsmParser LLVMHexagonDesc LLVMHexagonInfo LLVMBPFDisassembler LLVMBPFCodeGen LLVMBPFAsmParser LLVMBPFDesc LLVMBPFInfo LLVMAVRDisassembler LLVMAVRCodeGen LLVMAVRAsmParser LLVMAVRDesc LLVMAVRInfo LLVMARMDisassembler LLVMARMCodeGen LLVMARMAsmParser LLVMARMDesc LLVMARMUtils LLVMARMInfo LLVMAMDGPUDisassembler LLVMAMDGPUCodeGen LLVMMIRParser LLVMipo LLVMInstrumentation LLVMVectorize LLVMLinker LLVMIRReader LLVMAsmParser LLVMFrontendOpenMP LLVMAMDGPUAsmParser LLVMAMDGPUDesc LLVMAMDGPUUtils LLVMAMDGPUInfo LLVMAArch64Disassembler LLVMMCDisassembler LLVMAArch64CodeGen LLVMCFGuard LLVMGlobalISel LLVMSelectionDAG 
LLVMAsmPrinter LLVMDebugInfoDWARF LLVMCodeGen LLVMTarget LLVMScalarOpts LLVMInstCombine LLVMAggressiveInstCombine LLVMTransformUtils LLVMBitWriter LLVMAnalysis LLVMProfileData LLVMObject LLVMTextAPI LLVMBitReader LLVMCore LLVMRemarks LLVMBitstreamReader LLVMAArch64AsmParser LLVMMCParser LLVMAArch64Desc LLVMMC LLVMDebugInfoCodeView LLVMDebugInfoMSF LLVMBinaryFormat LLVMAArch64Utils LLVMAArch64Info LLVMSupport LLVMDemangle)

    # LLVMJITLink LLVMExecutionEngine LLVM-C * 
    # LLVMSupport LLVMJITLink 
SET(CLANG_LIBS clangAST clangBasic clangLex clangCodeGen clangFrontend clangEdit 
    clangSerialization clangSema clangDriver clangParse clangAnalysis)


# llvm_map_components_to_libnames(LLVM_LIBS support core clang)
# message(STATUS "Using LLVM libs: ${LLVM_LIBS}")







And then I have a src folder where I put the .cpp files and the following cmake file:






As you can see above, I made a few tests in the cmake files before I could figure out how to build my library properly ;-)

The first thing to mention here is that the LLVM libraries are static and use the static C runtime, while most of my other modules use the dynamic C runtime, so I had to build a shared module here, and specify the CMAKE_CXX_FLAGS value “/MT”.
I also spent quite a lot of time trying to figure out which LLVM and clang libraries I should link to exactly. At first I was linking to the LLVM-C.lib file, but that was a bad idea: as a result, I got an error when trying to create my ExecutionEngine with the call to auto executionEngine = builder.create(); stating that the JIT has not been linked in… Instead, what you really need to do is link to all the LLVM libraries listed in the output of the call to llvm-config --libs [as is the case in the cmake file above], and note that this list doesn't include the LLVM-C library.

⇒ With the cmake files and source files above I could successfully generate my nvLLVM.dll module :-)! It's a giant 49MB file, but it doesn't depend on any additional LLVM library (like LLVM-C.dll), and I can successfully use it in a simple test app with a call to the runClang() test function I defined here! So that module seems to contain a full, working C++ compiler on its own, which is absolutely unbelievable from my perspective!

The minimal test app I used here was simply:

#include <iostream>

#define DEBUG_MSG(msg) std::cout << msg << std::endl;

#include <llvm_common.h>

int main(int argc, char *argv[])
{
	DEBUG_MSG("Running clang compilation...");
	runClang("W:/Projects/NervSeed/temp/test1.cxx");
	DEBUG_MSG("Done running clang compilation.");

	return 0;
}

With cmake file:










And with that I get the following output:

//  (... lots of LLVM statistics here since they are enabled in my code above...)
                          ... Statistics Collected ...

2 file-search - Number of directory cache misses.
2 file-search - Number of directory lookups.
1 file-search - Number of file cache misses.
1 file-search - Number of file lookups.

[DEBUG]: Using target triple: x86_64-pc-windows-msvc
[DEBUG]: Retrieving nv_add/nv_sub functions...
[ERROR]: The meaning of life is: 42!
[ERROR]: The meaning of life is really: 42!
[DEBUG]: leaving runClang() function.

And of course the content of the test1.cxx file I provided above is simply [as one should expect]:

int nv_add(int a, int b) {
    return a+b;
}

int nv_sub(int a, int b) {
    return a-b;
}

So… as far as I understand, these results mean that the LLVM compiler successfully compiled the code from that test1 file, optimized it, and then loaded it into the LLVM context, so that we could use it directly as we just did, retrieving the function pointers and calling those functions! Isn't that amazing ??!! :-)

Now that I have an initial working JIT compiler base, there are quite a few additional investigations/tests to perform in that direction:

  • I found this official article Building a JIT: Starting out with KaleidoscopeJIT, which sounds very promising and versatile, so I should definitely have a deeper look at it and give it a try if possible.
  • Also, I should try linking to my existing C++ modules to see if everything works as expected.
  • I have also noticed that we can provide inputs “from memory” instead of “from file” [I think?]: that would definitely be good to have too!
  • And I should keep in mind that my final goal is to be able to generate C++ code from Lua, so I will really need some cleaning and refactoring of the test code above to make it more “production ready”, and then generate the required bindings of course ;-)

But that's all for today anyway! All those remaining points will be for another time ;-)!

16/04/2020 Update: If you found this post interesting or helpful, then you might want to read the following article I wrote on this topic, which is available here: JIT C++ compiler with LLVM - Part 2

    • That one seems to handle things from an even higher level, just calling the “main function” that you would typically find in the clang executable itself, if I understand correctly.
    • This seems a bit too high-level for my own taste, but it also mentions the concept of “injecting the compiled module” into a “JIT” object, and it provides a companion GitHub project to build a JIT from scratch that I should check out eventually: JitFromScratch example project on GitHub
  • Last modified: 2021/09/02 13:38
  • by manu