blog:2022:0722_nervland_refreshing_memory

NervLand: Refreshing memory

Hey hey, for a while now I've been thinking I should really start building some kind of virtual 3D environment “to do stuff”. Note that I'm not calling this a “game” because I would not want to create yet another game people could play for a moment before they move on to something else. No, I would like to bring something new to the table, something everyone will want to use for a verrryyy long time. lol.

So today, we will just start by refreshing my memory a bit on where I was previously with my NervLand project, and check if there is an appropriate path to start playing with Vulkan in a not-so-distant future. Ohh, and by the way, UnityEngine or UnrealEngine are out of the question here: I want to understand and control every step of what I'm going to build, so these will not work for me in this project.

  • So, I had this NervLand project which I was working on before I started my journey with the NervProj framework… I almost forgot about it lol, but in fact, this is really the same idea as what I have in mind here! Which means I could/should reuse that project.
  • ⇒ Let's restart the repository from scratch; there was actually not much there anyway. And we can restart this project under NervProj control:
  • Cloning the repository:
    $ nvp git clone -p nvl NervLand
  • I feel I should init the repository with some base files such as .gitignore, .editorconfig, etc. But I'm not sure I should add a Python environment to this project 🤔? So maybe I should improve a bit on the admin init method.
  • First turning the admin component into a dynamic component: OK
  • Next, I should really move the template files used in admin.py *out* of the Python file itself, because it is getting too big.
  • Now I need to init my new repo with only some of those common files, or… 🤔 actually, maybe I should plan for a local Python environment from the beginning?… NNaaa… I should not need that I think. So I keep only:
    • .gitignore
    • .gitattributes
    • .editorconfig
    • nvp_config.json
  • ⇒ In fact, that's already the default settings I should get if I do not specify the --with-py-env argument. So I simply use:
    $ nvp admin init nvl
    2022/07/10 20:00:44 [nvp.core.admin] INFO: No change in C:\Users\kenshin\AppData\Roaming\Code\User\settings.json
    2022/07/10 20:00:44 [nvp.core.admin] INFO: Wrtting updated vscode settings in D:\Projects\NervLand\.vscode\settings.template.json
    2022/07/10 20:00:44 [nvp.core.admin] INFO: Copyging VSCode settings template to D:\Projects\NervLand\.vscode\settings.json
    
    2022/07/10 20:00:44 [nvp.core.admin] INFO: Writting python env file D:\Projects\NervLand\.vs_env
    2022/07/10 20:00:44 [nvp.core.admin] INFO: Writting editor config file D:\Projects\NervLand\.editorconfig
    2022/07/10 20:00:44 [nvp.core.admin] INFO: Writting .gitignore file D:\Projects\NervLand\.gitignore
    2022/07/10 20:00:44 [nvp.core.admin] INFO: Writting .gitattributes file D:\Projects\NervLand\.gitattributes
    2022/07/10 20:00:44 [nvp.core.admin] INFO: Writting nvp_config.json file D:\Projects\NervLand\nvp_config.json
    2022/07/10 20:00:44 [nvp.core.admin] INFO: Adding pull section in git config.
  • OK, so far so good.
  • Next we need to set up our CMake build environment. In the NervProj framework I created an initial ModuleManager to handle that.
  • First let's turn that component into a dynamic one, and in the process, rename this to CMakeManager: OK
  • Then I added this simple CMake module definition in the NervLand nvp_config.json file:
    {
      "cmake_modules": {
        // NervLand main project:
        "nervland": {
          "url": "${PROJECT_ROOT_DIR}/nervland",
          "dependencies": {
            "BOOST_DIR": "boost"
          }
        }
      }
    }
    
  • But in fact I'm not quite sure I really need to use a subfolder here: let's start from the root folder instead, and we can always relocate later if needed. OK
  • What I would really like now is a simple command to set up my module boilerplate code, like:
    nvp cmake project init NervLand
  • Ooppss: just added the command, but also just realized that I need the builder component to be dynamic too to load the CMakeManager correctly: fixing that.
  • OK, now it's “getting ready” to work:
    $ nvp cmake project init nervland
    2022/07/11 07:58:35 [nvp.nvp_compiler] INFO: MSVC root dir is: D:\Softs\VisualStudio2022CE
    2022/07/11 07:58:35 [nvp.nvp_compiler] INFO: Found msvc-14.31.31103
    2022/07/11 07:58:35 [nvp.core.build_manager] INFO: Selecting compiler msvc-14.31.31103
    2022/07/11 07:58:35 [nvp.nvp_compiler] INFO: Initializing MSVC compiler environment...
    2022/07/11 07:58:37 [nvp.core.cmake_manager] INFO: Should init cmake project here: {'url': 'D:/Projects/NervLand', 'dependencies': {'BOOST_DIR': 'boost'}}
  • The first thing we need is to set up a main CMakeLists.txt file for the project:
            proj_dir = cproj['url']
            dest_file = self.get_path(proj_dir, "CMakeLists.txt")
            template_dir = self.get_path(self.ctx.get_root_dir(), "assets", "templates")
    
            if not self.file_exists(dest_file):
                logger.info("Writting main CMakeLists.txt file %s", dest_file)
                content = self.read_text_file(template_dir, "main_cmakelists.txt.tpl")
                content = content.replace("${PROJ_NAME}", cproj['name'])
                content = content.replace("${PROJ_VERSION}", cproj['version'])
                content = content.replace("${PROJ_PREFIX}", cproj['prefix'].upper())
    
                self.write_text_file(content, dest_file)
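  • The substitution above is plain string replacement on ${KEY} placeholders. A minimal standalone sketch of that mechanism (the template content below is a hypothetical stand-in for main_cmakelists.txt.tpl, not the real file):

```python
# Hypothetical stand-in for the main_cmakelists.txt.tpl template content:
template = (
    "cmake_minimum_required(VERSION 3.22)\n"
    "project(${PROJ_NAME} VERSION ${PROJ_VERSION})\n"
    "set(${PROJ_PREFIX}_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR})\n"
)

def render(content, values):
    """Replace each known ${KEY} placeholder with its value."""
    for key, val in values.items():
        content = content.replace("${" + key + "}", val)
    return content

out = render(template, {
    "PROJ_NAME": "nervland",
    "PROJ_VERSION": "0.1.0",
    "PROJ_PREFIX": "NVL",
})
print(out)
```

    Note that unknown placeholders such as ${CMAKE_CURRENT_SOURCE_DIR} are left untouched, which is exactly what we want: CMake itself will resolve those at configure time.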
  • And we also need to create some initial content for the sources and tests folders (i.e. empty CMakeLists.txt files):
            # Create the source/tests folder:
            src_dir = self.get_path(proj_dir, "sources")
            self.make_folder(src_dir)
    
            dest_file = self.get_path(src_dir, "CMakeLists.txt")
            if not self.file_exists(dest_file):
                logger.info("Writting file %s", dest_file)
                content = f"# CMake modules for {proj_name}\n"
                self.write_text_file(content, dest_file)
    
            # Create the test folder:
            test_dir = self.get_path(proj_dir, "tests")
            self.make_folder(test_dir)
    
            dest_file = self.get_path(test_dir, "CMakeLists.txt")
            if not self.file_exists(dest_file):
                logger.info("Writting file %s", dest_file)
                content = f"# Cmake tests for {proj_name} modules\n"
                self.write_text_file(content, dest_file)
  • Yeah, so it seems something went wrong while collecting the blocks on CELO, and I now get a duplicate key issue:
    2022/07/14 11:22:04 [nvp.nvp_object] ERROR: Subprocess terminated with error code 1 (cmd=['/mnt/data1/dev/projects/NervProj/.pyenvs/bsc_env/bin/python3', '/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/blockchain_manager.py', 'collect', 'blocks', '-c', 'celo'])
    2022/07/14 11:22:04 [nvp.components.runner] ERROR: Error occured in script command:
    cmd=['/mnt/data1/dev/projects/NervProj/.pyenvs/bsc_env/bin/python3', '/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/blockchain_manager.py', 'collect', 'blocks', '-c', 'celo']
    cwd=None
    return code=1
    lastest outputs:
    chain.handle("collect_evm_blocks")
    File "/mnt/data1/dev/projects/NervProj/nvp/nvp_component.py", line 105, in handle
    return self.call_handler(f"{self.handlers_path}.{hname}", self, *args, **kwargs)
    File "/mnt/data1/dev/projects/NervProj/nvp/nvp_component.py", line 100, in call_handler
    return self.ctx.call_handler(hname, *args, **kwargs)
    File "/mnt/data1/dev/projects/NervProj/nvp/nvp_context.py", line 676, in call_handler
    return handler(*args, **kwargs)
    File "/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/handlers/collect_evm_blocks.py", line 47, in handle
    ntx += process_block(cdb, tdb, last_block, "100.00%: ")
    File "/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/handlers/collect_evm_blocks.py", line 74, in process_block
    cdb.insert_blocks([desc])
    File "/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/chain_db.py", line 270, in insert_blocks
    self.execute(SQL_INSERT_BLOCK, rows, many=True, commit=True)
    File "/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/chain_db.py", line 191, in execute
    return self.sql_db.execute(*args, **kaargs)
    File "/mnt/data1/dev/projects/NervHome/nvh/core/postgresql_db.py", line 60, in execute
    c.executemany(code, data)
    psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "blocks_pkey"
    DETAIL:  Key (number)=(14035044) already exists.
  • Let's see if I can handle that robustly… ⇒ Just found the “UPSERT” feature in PostgreSQL, that's interesting ;-), so updating the insert SQL command for the blocks as follows:
    SQL_INSERT_BLOCK = ''' INSERT INTO
        blocks(number,timestamp,miner_id,size,difficulty,tx_count,gas_used)
        VALUES(%s,%s,%s,%s,%s,%s,%s) ON CONFLICT (number) DO NOTHING; '''
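  • The ON CONFLICT (number) DO NOTHING clause makes the insert idempotent on the primary key: a duplicate block number is silently skipped instead of raising a UniqueViolation. A self-contained illustration of the same behaviour using sqlite3 (which supports the same UPSERT syntax since SQLite 3.24, so no PostgreSQL server is needed for the demo):

```python
import sqlite3

# In-memory table standing in for the real PostgreSQL blocks table:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blocks (number INTEGER PRIMARY KEY, tx_count INTEGER)")

sql = "INSERT INTO blocks(number, tx_count) VALUES(?, ?) ON CONFLICT (number) DO NOTHING"

# Inserting the same block number twice: without the ON CONFLICT clause,
# the second row would trigger a unique-constraint violation.
conn.executemany(sql, [(14035044, 10), (14035044, 10)])
count = conn.execute("SELECT COUNT(*) FROM blocks").fetchone()[0]
print(count)  # 1
```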
  • Well, that is still not quite working… and I really don't feel like spending my day on this, so for now, let's just reset our databases again instead. I'm actually building a dedicated handler for that:
    """Drop all the tables for the blockchain transactions"""
    
    import logging
    
    # from nvh.crypto.blockchain.evm_blockchain import EVMBlockchain
    from nvp.nvp_context import NVPContext
    
    logger = logging.getLogger("clear_all_tx_databases")
    
    
    def handle(_):
        """Handler function entry point"""
    
        ctx = NVPContext.get()
        cnames = ["bsc", "eth", "celo", "avax", "aurora"]
        chains = {name: ctx.get_component(f"{name}_chain") for name in cnames}
        dbs = {name: chains[name].get_db() for name in cnames}
    
        tnames = ["blocks", "transactions"]
    
        for cname, cdb in dbs.items():
            for tname in tnames:
                logger.info("Dropping %s on %s...", tname, cname)
                cdb.execute(f"DROP TABLE IF EXISTS {tname};", commit=True)
    
        tdb = chains["bsc"].get_tx_db()
        tnames = ["txdata", "swap_tokens_op", "swap_native_op", "transfer_op"]
    
        for cname in cnames:
            for tname in tnames:
                table_name = f"{cname}_{tname}"
                logger.info("Dropping %s...", table_name)
                tdb.execute(f"DROP TABLE IF EXISTS {table_name};", commit=True)
    
  • Executing this handler is done with the following command:
    $ nvp bchain drop-all-tx-tables
    2022/07/14 11:09:04 [clear_all_tx_databases] INFO: Dropping blocks on bsc...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping transactions on bsc...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping blocks on eth...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping transactions on eth...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping blocks on celo...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping transactions on celo...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping blocks on avax...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping transactions on avax...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping blocks on aurora...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping transactions on aurora...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping bsc_txdata...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping bsc_swap_tokens_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping bsc_swap_native_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping bsc_transfer_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping eth_txdata...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping eth_swap_tokens_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping eth_swap_native_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping eth_transfer_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping celo_txdata...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping celo_swap_tokens_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping celo_swap_native_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping celo_transfer_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping avax_txdata...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping avax_swap_tokens_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping avax_swap_native_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping avax_transfer_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping aurora_txdata...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping aurora_swap_tokens_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping aurora_swap_native_op...
    2022/07/14 11:09:05 [clear_all_tx_databases] INFO: Dropping aurora_transfer_op...
  • Also in the process, I added a command/handler to check the size of the databases:
    """Check the size of the databases"""
    
    import logging
    
    from nvp.nvp_context import NVPContext
    
    logger = logging.getLogger("check_db_sizes")
    
    
    def handle(_):
        """Handler function entry point"""
    
        ctx = NVPContext.get()
        cnames = ["bsc", "eth", "celo", "avax", "aurora"]
        chains = {name: ctx.get_component(f"{name}_chain") for name in cnames}
    
        # Check database size:
        for cname, chain in chains.items():
            tname = chain.get_config()["chain_db_name"]
            sql = f"SELECT pg_size_pretty( pg_database_size('{tname}') );"
            cdb = chain.get_db()
            size = cdb.execute(sql).fetchone()[0]
            logger.info("%s database size: %s", cname, size)
            sql = "SELECT pg_size_pretty( pg_total_relation_size('blocks') );"
            size = cdb.execute(sql).fetchone()[0]
            logger.info("%s blocks table size: %s", cname, size)
    
        tname = "transactions_db"
        chain = chains["bsc"]
        tdb = chain.get_tx_db()
    
        sql = f"SELECT pg_size_pretty( pg_database_size('{tname}') );"
        size = tdb.execute(sql).fetchone()[0]
        logger.info("%s database size: %s", tname, size)
        for cname in cnames:
            sql = f"SELECT pg_size_pretty( pg_total_relation_size('{cname}_txdata') );"
            size = tdb.execute(sql).fetchone()[0]
            logger.info("%s txdata table size: %s", cname, size)
    
  • Works with the command:
    $ nvp bchain check-db-sizes
    2022/07/14 11:05:05 [check_db_sizes] INFO: bsc database size: 1850 MB
    2022/07/14 11:05:05 [check_db_sizes] INFO: bsc blocks table size: 25 MB
    2022/07/14 11:05:05 [check_db_sizes] INFO: eth database size: 782 MB
    2022/07/14 11:05:05 [check_db_sizes] INFO: eth blocks table size: 5800 kB
    2022/07/14 11:05:06 [check_db_sizes] INFO: celo database size: 36 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: celo blocks table size: 14 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: avax database size: 97 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: avax blocks table size: 37 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: aurora database size: 55 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: aurora blocks table size: 43 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: transactions_db database size: 14 GB
    2022/07/14 11:05:06 [check_db_sizes] INFO: bsc txdata table size: 8043 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: eth txdata table size: 3889 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: celo txdata table size: 720 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: avax txdata table size: 649 MB
    2022/07/14 11:05:06 [check_db_sizes] INFO: aurora txdata table size: 248 MB
  • While handling the issue above, I noticed that I could not directly execute a handler from the BlockchainManager class itself:
            if cmd == "check-db-sizes":
                self.handle("check_db_sizes")
                return True
  • This will produce this error:
    File "D:\Projects\NervProj\nvp\nvp_component.py", line 93, in run
    res = self.process_command(cmd)
    File "D:\Projects\NervProj\nvp\nvp_component.py", line 83, in process_command
    return self.process_cmd_path(self.ctx.get_command_path())
    File "D:\Projects\NervHome\nvh\crypto\blockchain\blockchain_manager.py", line 349, in process_cmd_path
    self.handle("check_db_sizes")
    File "D:\Projects\NervProj\nvp\nvp_component.py", line 105, in handle
    return self.call_handler(f"{self.handlers_path}.{hname}", self, *args, **kwargs)
    File "D:\Projects\NervProj\nvp\nvp_component.py", line 100, in call_handler
    return self.ctx.call_handler(hname, *args, **kwargs)
    File "D:\Projects\NervProj\nvp\nvp_context.py", line 675, in call_handler
    handler = self.get_handler(hname)
    File "D:\Projects\NervProj\nvp\nvp_context.py", line 647, in get_handler
    filepath = self.resolve_module_file(hname)
    File "D:\Projects\NervProj\nvp\nvp_context.py", line 640, in resolve_module_file
    self.throw("Cannot resolve file for module %s", hname)
    File "D:\Projects\NervProj\nvp\nvp_object.py", line 77, in throw
    raise NVPCheckError(fmt % args)
    nvp.nvp_object.NVPCheckError: Cannot resolve file for module None.check_db_sizes
  • ⇒ checking if I can fix that: OK, I see: this is because I'm creating the BlockchainManager instance “manually” in the main handler:
    if __name__ == "__main__":
        # Create the context:
        context = NVPContext()
    
        # Add our component:
        comp = context.register_component("chain_man", BlockchainManager(context))
    
        psr = context.build_parser("find-relevant-sig")
        psr.add_str("-c", "--chain", dest="chain", default="bsc")("Chain of interest")
  • Instead I need it to be created by the context to set up the construction frame correctly 🤔 hmmm. Arrff, never mind: I don't want to spend my day on that issue either, so let's keep it as is for now.
  • Next, I should be able to add a new “module” to a given project, either from the project config or directly from the command line. Let's figure out what exactly we need for that.
  • Note: while working on this I updated the command to setup a cmake project to simply be:
    nvp cmake setup <proj_name>
  • ⇒ So I'm currently adding a few default files when setting up a new module, but now I really need to build this thing to check it's working as expected.
  • OK: that was relatively easy 😄: there was already a “build_module()” method in the CMakeManager class, but in fact it really builds a whole CMake project, so I just renamed it to build_project. Now I can set up an initial version of a module in the nervland project, and then build the full project with simply:
    $ nvp cmake setup nervland
    $ nvp cmake build nervland
  • ⇒ I think what I want to do here is to build a high-level definition of my modules directly in the config.json file, and then my cmake setup command could generate most of the required files for me automatically: could be very interesting.
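  • Just to fix ideas, here is a hypothetical sketch of what such a high-level module definition and its generator could look like (the keys, module names, and generated output are all assumptions for illustration, not the actual NervProj format):

```python
# Hypothetical per-module descriptions as they could appear in nvp_config.json:
modules = {
    "nvCore": {"type": "library", "dependencies": []},
    "nvApp": {"type": "executable", "dependencies": ["nvCore"]},
}

def gen_cmakelists(name, desc):
    """Generate a minimal CMakeLists.txt body for one module description."""
    kind = "add_library" if desc["type"] == "library" else "add_executable"
    lines = [f"{kind}({name} ${{SOURCE_FILES}})"]
    if desc["dependencies"]:
        deps = " ".join(desc["dependencies"])
        lines.append(f"target_link_libraries({name} PRIVATE {deps})")
    return "\n".join(lines) + "\n"

out = gen_cmakelists("nvApp", modules["nvApp"])
print(out)
```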
  • This is really starting to piss me off: still getting conflicting blocks on the CELO blockchain 🤬:
    File "/mnt/data1/dev/projects/NervProj/nvp/nvp_component.py", line 105, in handle
    return self.call_handler(f"{self.handlers_path}.{hname}", self, *args, **kwargs)
    File "/mnt/data1/dev/projects/NervProj/nvp/nvp_component.py", line 100, in call_handler
    return self.ctx.call_handler(hname, *args, **kwargs)
    File "/mnt/data1/dev/projects/NervProj/nvp/nvp_context.py", line 676, in call_handler
    return handler(*args, **kwargs)
    File "/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/handlers/collect_evm_blocks.py", line 47, in handle
    ntx += process_block(cdb, tdb, last_block, "100.00%: ")
    File "/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/handlers/collect_evm_blocks.py", line 74, in process_block
    cdb.insert_blocks([desc])
    File "/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/chain_db.py", line 270, in insert_blocks
    self.execute(SQL_INSERT_BLOCK, rows, many=True, commit=True)
    File "/mnt/data1/dev/projects/NervHome/nvh/crypto/blockchain/chain_db.py", line 191, in execute
    return self.sql_db.execute(*args, **kaargs)
    File "/mnt/data1/dev/projects/NervHome/nvh/core/postgresql_db.py", line 60, in execute
    c.executemany(code, data)
    psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "blocks_pkey"
    DETAIL:  Key (number)=(14035044) already exists.
  • So now let's really check whether the block is already present before adding it. Added this method in ChainDB:
        def has_block(self, bnum):
            """Check if a given block is already in the db"""
            sql = f"select exists(select 1 from blocks where number={bnum})"
            cur = self.execute(sql)
            return cur.fetchone()[0]
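  • As a side note, interpolating bnum into the SQL with an f-string is fine for trusted integers, but a parameterized query avoids any quoting concern. A sqlite3-based sketch of the same existence check (psycopg2 would use %s placeholders instead of ?; the table setup here is just for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blocks (number INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO blocks(number) VALUES (14035044)")

def has_block(conn, bnum):
    """Check if a given block number is already in the db (parameterized)."""
    cur = conn.execute("SELECT EXISTS(SELECT 1 FROM blocks WHERE number=?)", (bnum,))
    return bool(cur.fetchone()[0])

print(has_block(conn, 14035044))  # True
print(has_block(conn, 1))         # False
```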
  • ⇒ I really hope this error will stop here now.
  • Before going further with our C++ code, I want to set up the C++ linter/formatter properly in Visual Studio Code.
  • So first, let's retrieve our Clang compiler: I don't remember how to do this, of course 😂, let's see…
  • Actually, I'm wondering how exactly I built Clang from sources? 🤔 Let's try to redo that with the latest version (which is 14.0.6 right now).
  • I think the build command should just be:
    $ nvp build libs LLVM
    2022/07/14 12:21:34 [nvp.nvp_compiler] INFO: MSVC root dir is: D:\Softs\VisualStudio2022CE
    2022/07/14 12:21:34 [nvp.nvp_compiler] INFO: Found msvc-14.32.31326
    2022/07/14 12:21:34 [nvp.core.build_manager] INFO: Selecting compiler msvc-14.32.31326
    2022/07/14 12:21:34 [nvp.core.build_manager] INFO: List of settings: {'verbose': False, 'l0_cmd': 'libs', 'lib_names': 'LLVM', 'compiler_type': None, 'rebuild': False, 'preview': False, 'keep_build': False, 'append': False}
    2022/07/14 12:21:34 [nvp.core.build_manager] INFO: All libraries OK.
  • Hmmm, that's interesting: not quite what I expected… Ahh, we need lowercase, so:
    $ nvp build libs llvm
  • Except I didn't get any output because of my output-capturing mechanism in the execute() function, so I had to update that part too. Here is a new version using read(1) instead of readline(), handling the \r character correctly:
            # cf. https://stackoverflow.com/questions/31833897/
            # python-read-from-subprocess-stdout-and-stderr-separately-while-preserving-order
            def reader(pipe, queue, id):
                """Reader function for a stream"""
                try:
                    with pipe:
                        # Note: need to read the char one by one here, until we find a \r or \n value:
                        buf = b''
    
                        def readop():
                            return pipe.read(1)
    
                        for char in iter(readop, b''):
                            if char != b'\r':
                                buf += char
    
                            if char == b'\r' or char == b'\n':
                                queue.put((id, buf))
                                buf = b''
    
                            # Add the carriage return on the new line:
                            if char == b'\r':
                                buf += char
    
                        # for line in iter(pipe.readline, b''):
                        #     queue.put((id, line))
                finally:
                    queue.put(None)
    
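  • For context, this reader is meant to run in one thread per stream. Here is a self-contained sketch of how such readers could be wired to a subprocess through a shared queue (the wiring code is an assumption for illustration, not the actual execute() implementation):

```python
import subprocess
import sys
from queue import Queue
from threading import Thread

def reader(pipe, queue, sid):
    """Read a pipe byte by byte, emitting a chunk on each \\n or \\r."""
    try:
        with pipe:
            buf = b''
            for char in iter(lambda: pipe.read(1), b''):
                if char != b'\r':
                    buf += char
                if char in (b'\r', b'\n'):
                    queue.put((sid, buf))
                    buf = b''
                # Keep the carriage return at the start of the next chunk:
                if char == b'\r':
                    buf += char
            if buf:
                queue.put((sid, buf))
    finally:
        # Sentinel marking the end of this stream:
        queue.put(None)

# Wire both stdout and stderr of a child process to the same queue:
proc = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
queue = Queue()
Thread(target=reader, args=(proc.stdout, queue, "out")).start()
Thread(target=reader, args=(proc.stderr, queue, "err")).start()

lines = []
done = 0
while done < 2:  # one None sentinel per stream
    item = queue.get()
    if item is None:
        done += 1
    else:
        lines.append(item)
proc.wait()
print(lines)
```

    Reading one byte at a time is slower than readline(), but it is the only simple way to react to \r-terminated progress updates (e.g. from CMake or Ninja) before the line is complete.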
  • Okay, so we got LLVM 14.0.6 built successfully as a library. But now I rather need to use it as a “tool”, no? Hmm, this is a bit tricky in fact…
  • So what I think I should do is to handle that in the CMakeManager directly: when setting up a cmake project, I should check which nvp project is the parent of that cmake project, and then install the appropriate VSCode settings in that nvp project. Makes sense, right? And before doing so, we will check that the LLVM library is installed, so we should be good.
  • Note: I need the clang-format extension in vscode for this (I think)
  • ⇒ In fact also installed the C/C++ Extension Pack too (@id:ms-vscode.cpptools-extension-pack)
  • Okay, so in the end I should not install the clang-format extension, and instead just use the clang-format support in the C/C++ extension pack 👍! The current settings.json looks like this:
    {
      "python.envFile": "${workspaceFolder}/.vs_env",
      "editor.formatOnSave": true,
      "C_Cpp.clang_format_path": "D:/Projects/NervProj/libraries/windows_msvc/LLVM-14.0.6/bin/clang-format.exe"
    }
  • Note: in parallel, running the compilation of LLVM 14.0.6 on Linux:
    $ nvp build libs llvm
    • But of course, this is failing 😒:
      Loading profile for neptune...
      =ON', '-DLLVM_BUILD_TOOLS=ON', '-DLLVM_ENABLE_PROJECTS=clang;clang-tools-extra;libclc;lld;lldb;polly;pstl', '-DLLVM_STATIC_LINK_CXX_STDLIB=OFF', '-DLLVM_INCLUDE_TOOLS=ON', '-DLLVM_ENABLE_PER_TARGET_RUNTIME_DIR=OFF', '-DLLVM_ENABLE_LIBXML2=ON', '-DZLIB_LIBRARY=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/zlib-1.2.12/lib/libz.a', '-DZLIB_INCLUDE_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/zlib-1.2.12/include', '-DLIBXML2_LIBRARY=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/libxml2-2.9.13/lib/libxml2.a', '-DLIBXML2_INCLUDE_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/libxml2-2.9.13/include/libxml2', '-DLIBCXX_INSTALL_LIBRARY_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/lib', '-DLIBCXX_INSTALL_INCLUDE_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/include/c++/v1', '-DLIBCXX_INSTALL_INCLUDE_TARGET_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/include/c++/v1', '-DLIBCXXABI_INSTALL_LIBRARY_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/lib', '-DLIBUNWIND_INSTALL_INCLUDE_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/include/c++/v1', '-DLIBUNWIND_INSTALL_LIBRARY_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/lib', '-DLLVM_ENABLE_RUNTIMES=libc;libcxx;libcxxabi;libunwind;openmp', '../llvm'])
      ninja: error: loading 'build.ninja': No such file or directory
      2022/07/15 10:04:04 [nvp.nvp_object] ERROR: Subprocess terminated with error code 1 (cmd=['/mnt/data1/dev/projects/NervProj/tools/linux/ninja-1.10.2/ninja'])
      ninja: error: loading 'build.ninja': No such file or directory
      2022/07/15 10:04:04 [nvp.nvp_object] ERROR: Subprocess terminated with error code 1 (cmd=['/mnt/data1/dev/projects/NervProj/tools/linux/ninja-1.10.2/ninja', 'install'])
      2022/07/15 10:04:04 [nvp.core.tools] WARNING: Cannot create package: invalid source path: /mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6
  • Come on… give me a break 😖. Anyway, now retrying while capturing the outputs:
    $ nvp build libs llvm -k 2>&1 | tee llvm_build.log
  • Hmmm, okay… CMake tells me that my compiler is broken?:
    2022/07/15 11:59:43 [nvp.nvp_builder] INFO: Cmake command: ['/mnt/data1/dev/projects/NervProj/tools/linux/cmake-3.22.3/bin/cmake', '-G', 'Ninja', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6', '-DLLVM_TARGETS_TO_BUILD=X86', '-DLLVM_ENABLE_EH=ON', '-DLLVM_ENABLE_RTTI=ON', '-DLLVM_BUILD_TOOLS=ON', '-DLLVM_ENABLE_PROJECTS=clang;clang-tools-extra;libclc;lld;lldb;polly;pstl', '-DLLVM_STATIC_LINK_CXX_STDLIB=OFF', '-DLLVM_INCLUDE_TOOLS=ON', '-DLLVM_ENABLE_PER_TARGET_RUNTIME_DIR=OFF', '-DLLVM_ENABLE_LIBXML2=ON', '-DZLIB_LIBRARY=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/zlib-1.2.12/lib/libz.a', '-DZLIB_INCLUDE_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/zlib-1.2.12/include', '-DLIBXML2_LIBRARY=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/libxml2-2.9.13/lib/libxml2.a', '-DLIBXML2_INCLUDE_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/libxml2-2.9.13/include/libxml2', '-DLIBCXX_INSTALL_LIBRARY_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/lib', '-DLIBCXX_INSTALL_INCLUDE_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/include/c++/v1', '-DLIBCXX_INSTALL_INCLUDE_TARGET_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/include/c++/v1', '-DLIBCXXABI_INSTALL_LIBRARY_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/lib', '-DLIBUNWIND_INSTALL_INCLUDE_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/include/c++/v1', '-DLIBUNWIND_INSTALL_LIBRARY_DIR=/mnt/data1/dev/projects/NervProj/libraries/linux_clang/LLVM-14.0.6/lib', '-DLLVM_ENABLE_RUNTIMES=libc;libcxx;libcxxabi;libunwind;openmp', '../llvm']
    -- The C compiler identification is Clang 13.0.1
    -- The CXX compiler identification is Clang 13.0.1
    -- The ASM compiler identification is Clang with GNU-like command-line
    -- Found assembler: /mnt/data1/dev/projects/NervProj/tools/linux/clang-13.0.1/bin/clang
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - failed
    -- Check for working C compiler: /mnt/data1/dev/projects/NervProj/tools/linux/clang-13.0.1/bin/clang
    -- Check for working C compiler: /mnt/data1/dev/projects/NervProj/tools/linux/clang-13.0.1/bin/clang - broken
    CMake Error at /mnt/data1/dev/projects/NervProj/tools/linux/cmake-3.22.3/share/cmake-3.22/Modules/CMakeTestCCompiler.cmake:69 (message):
      The C compiler
    
        "/mnt/data1/dev/projects/NervProj/tools/linux/clang-13.0.1/bin/clang"
    
      is not able to compile a simple test program.
    
      It fails with the following output:
    
        Change Dir: /mnt/data1/dev/projects/NervProj/libraries/build/LLVM-14.0.6/build/CMakeFiles/CMakeTmp
        
        Run Build Command(s):/mnt/data1/dev/projects/NervProj/tools/linux/ninja-1.10.2/ninja cmTC_588f0 && [1/2] Building C object CMakeFiles/cmTC_588f0.dir/testCCompiler.c.o
        [2/2] Linking C executable cmTC_588f0
        FAILED: cmTC_588f0 
        : && /mnt/data1/dev/projects/NervProj/tools/linux/clang-13.0.1/bin/clang -fPIC -DLIBXML_STATIC -I/mnt/data1/dev/projects/NervProj/libraries/linux_clang/libiconv-1.16/include -I/mnt/data1/dev/projects/NervProj/libraries/linux_clang/libxml2-2.9.13/include/libxml2 -L/mnt/data1/dev/projects/NervProj/libraries/linux_clang/libiconv-1.16/lib -llibiconv.a CMakeFiles/cmTC_588f0.dir/testCCompiler.c.o -o cmTC_588f0   && :
        /usr/bin/ld: cannot find -llibiconv.a
        clang: error: linker command failed with exit code 1 (use -v to see invocation)
        ninja: build stopped: subcommand failed.
    
      CMake will not be able to correctly generate this project.
    Call Stack (most recent call first):
      CMakeLists.txt:49 (project)
    
    -- Configuring incomplete, errors occurred!
    See also "/mnt/data1/dev/projects/NervProj/libraries/build/LLVM-14.0.6/build/CMakeFiles/CMakeOutput.log".
    See also "/mnt/data1/dev/projects/NervProj/libraries/build/LLVM-14.0.6/build/CMakeFiles/CMakeError.log".
  • Checking the CMakeError.log file, this seems to be related to how we pass the “libiconv” library to the compiler:
     -L/mnt/data1/dev/projects/NervProj/libraries/linux_clang/libiconv-1.16/lib -llibiconv.a
  • ⇒ We should rather have -liconv above I think… ⇒ Fixing those lines in the LLVM build script:
            # Note: we also need to add libiconv to the include/link flags:
            iconv_dir = self.man.get_library_root_dir("libiconv").replace("\\", "/")
            iconv_lib = "libiconvStatic.lib" if self.is_windows else "iconv"
  • Arrgg, crap: this does build/compile, but it's linking against the shared version of the iconv library, so I then get an additional runtime dependency on it that I really want to avoid:
    /mnt/data1/dev/projects/NervProj/libraries/build/LLVM-14.0.6/build/bin/llvm-tblgen: error while loading shared libraries: libiconv.so.2: cannot open shared object file: No such file or directory
  • Which means I need to rebuild again and this time ensure I'm linking to the static version somehow.
  • So I tried everything I could with no success so far… But I have not said my last word yet: let's check if I can patch that LLDB program to accept my iconv static library…
    • ⇒ OK, I think this means I should patch the compilation of the c-index-test tool and LLDB actually: both are accessing the LibXml2::LibXml2 dependency.
There is also the WindowsManifest project which is using the LibXml2::LibXml2 library, but I don't expect that one to be built on linux, so no need to patch it.
  • Here are the additional patches I'm adding to handle this:
            if self.is_linux:
                file = self.get_path(build_dir, "lldb", "source", "Host", "CMakeLists.txt")
                self.replace_in_file(file,
                                     "list(APPEND EXTRA_LIBS LibXml2::LibXml2)",
                                     "list(APPEND EXTRA_LIBS LibXml2::LibXml2 ${LIBICONV_LIBRARY})")
    
                file = self.get_path(build_dir, "clang", "tools", "c-index-test", "CMakeLists.txt")
                self.replace_in_file(file,
                                     "target_link_libraries(c-index-test PRIVATE LibXml2::LibXml2)",
                                     "target_link_libraries(c-index-test PRIVATE LibXml2::LibXml2 ${LIBICONV_LIBRARY})")
  • And thank god, this is finally building on linux!
  • What I don't like currently is that the formatter will not add any space between the function parentheses and the curly brackets of the function body, as in the code below:
    namespace nvl {
    class MyTest {
      public:
        void hello(){};
    };
    }
  • ⇒ This is annoying/disturbing lol, so I searched how to configure that: found it! (by comparing with the WebKit style: it's SpaceBeforeCpp11BracedList: true 👍)
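  • For reference, that option lives in the .clang-format file; a minimal fragment could look like this (the BasedOnStyle and IndentWidth values here are just placeholders, not necessarily my actual config):

```yaml
BasedOnStyle: LLVM
IndentWidth: 4
SpaceBeforeCpp11BracedList: true
```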
  • So I tried about everything I could think of to get this working with the C/C++ extension, but no luck so far:
      "C_Cpp.codeAnalysis.clangTidy.path": "D:/Projects/NervProj/libraries/windows_msvc/LLVM-14.0.6/bin/clang-tidy.exe",
      "C_Cpp.codeAnalysis.clangTidy.enabled": true,
      "C_Cpp.codeAnalysis.runAutomatically": true,
      "C_Cpp.codeAnalysis.clangTidy.checks.enabled": [
        // "-*",
        "google-*",
        "cppcoreguidelines-*",
        "clang-analyzer-*",
        "readability-*"
        // "-cppcoreguidelines-pro-bounds-constant-array-index"
      ],
      "C_Cpp.codeAnalysis.clangTidy.config": ".clang-tidy"
  • ⇒ So instead I installed the additional extension clang-tidy and this is working just fine with the config:
      "clang-tidy.executable": "D:/Projects/NervProj/libraries/windows_msvc/LLVM-14.0.6/bin/clang-tidy.exe",
      "clang-tidy.lintOnSave": true,
      "clang-tidy.checks": [
        "-*",
        "google-*",
        "cppcoreguidelines-*",
        "clang-*",
        "readability-*",
        "-cppcoreguidelines-pro-bounds-constant-array-index"
      ]
    • Except that this is only working when we have a single folder in the workspace 😥, too bad.
    • Maybe I need to upgrade my VSCode installation and try again with the C/C++ extension only? ⇒ Nope, doesn't help.
An interesting list of checks categories is available at: https://clang.llvm.org/extra/clang-tidy/
  • ⇒ But now I have just discovered the clangd app/plugin: that one looks really promising, so let's give it a try.
  • Okay: that's indeed looking pretty good so far: the only limitation in a multi folder workspace is that we need to update the settings in the workspace file itself, as follows:
    {
      "folders": [
        {
          "path": "NervLand"
        },
        {
          "path": "NervProj"
        },
        {
          "path": "NervHome"
        }
      ],
      "settings": {
        "clangd.path": "D:/Projects/NervProj/libraries/windows_msvc/LLVM-14.0.6/bin/clangd.exe"
      }
    }
  • But then clangd seems to be working just fine out of the box! (Note: we have to disable IntelliSense from the Microsoft C/C++ extension to use clangd, but that's definitely acceptable I think 👍)
  • Now improving the clang-tidy config (clangd will also use our .clang-tidy file):
    ---
    Checks: "-*,boost-*,bugprone-*,clang-analyzer-*,concurrency-*,cppcoreguidelines-*,hicpp-*,misc-*,modernize-*,performance-*,portability-*,readability-*"
  • Also updating the CMakeManager to handle the clangd settings/files: OK
  • Additional note: In fact to get a correct config for clangd we also need to generate a compile_commands.json file with cmake itself: cf. https://clangd.llvm.org/installation
  • ⇒ I thus added this code in the build_project method:
            flags.append("-DCMAKE_EXPORT_COMPILE_COMMANDS=1")
            builder.run_cmake(build_dir, install_dir, src_dir, flags)
    
            # Copy the compile_commands.json file:
            comp_file = self.get_path(build_dir, "compile_commands.json")
            self.check(self.file_exists(comp_file), "No file %s", comp_file)
            dst_file = self.get_path(src_dir, "compile_commands.json")
            self.rename_file(comp_file, dst_file)
    
            if gen_commands:
                # Don't actually run the build
                return
  • Arrff, then we have a problem, because I generate those compile commands with MSVC as the compiler, including the precompiled headers setup for that compiler 😢 ⇒ this leads to the “too many errors emitted, stopping now” issue on the included headers.
  • So I should use some cmake code such as the following when generating the compile commands:
    # add the pch custom target as a dependency
    add_dependencies(corelib pch)
    
    # add the flag
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -include-pch ${CMAKE_CURRENT_BINARY_DIR}/stdinc.hpp.pch")
    
    # target
    add_custom_target(pch COMMAND clang -x c++-header ${CMAKE_CURRENT_SOURCE_DIR}/src/stdinc.hpp -o ${CMAKE_CURRENT_BINARY_DIR}/stdinc.hpp.pch)
  • One thing I noticed when trying to change the compiler to clang to generate the compile_commands.json file is that I also need to delete the project build folder completely to apply that change, otherwise cmake will keep its cached configuration using MSVC ⇒ I should add a flag to the cmake setup command to force a reconfiguration.
  • Next, let's add our macro to set up precompiled headers for both MSVC and Clang… And I just found target_precompile_headers in cmake 😅 my my my. So let's see if I can use that directly… Yep! Seems to be working just fine with the following config (and no need for a precomp.cpp file anymore 👍!):
    set(TARGET_NAME "nvlCore_static")
    
    add_definitions(-DNVL_LIB_STATIC)
    
    add_library(${TARGET_NAME} ${PUBLIC_HEADERS} ${SOURCE_FILES})
    
    target_link_libraries(${TARGET_NAME} ${FLAVOR_LIBS} ${BOOST_LIBS})
    target_precompile_headers(${TARGET_NAME} PRIVATE "../src/core_precomp.h")
When generating a PCH this way, we don't even need to include the precomp header file in our sources anymore, as it will be added automatically for us at compile time.
  • Continuing with the CmakeManager functions, I now want to add support to create a header file in a given module directly from the command line, like this:
    nvp cmake add header nervland Core core_macros.h
  • OK, this works fine. Here is the main python function to handle this:
        def add_header_file(self, cproj, mod_name, file_name):
            """Add a new header file in a the given module"""
            proj_dir = cproj['root_dir']
    
            mdesc = self.get_module_desc(cproj, mod_name)
            self.check(mdesc is not None, "invalid cmake project module %s", mod_name)
    
            mod_dir = mod_name
            if mdesc.get("type", "library") == "library":
                mod_dir = f"{cproj['prefix']}{mod_name}"
    
            template_dir = self.get_path(self.ctx.get_root_dir(), "assets", "templates")
    
            dest_file = self.get_path(proj_dir, "modules", mod_dir, "src", file_name)
            bname = self.remove_file_extension(self.get_filename(file_name))
    
            tpl_file = self.get_path(template_dir, "header_file.h.tpl")
    
            hlocs = {
                "%PROJ_PREFIX_UPPER%": cproj['prefix'].upper(),
                "%HEADER_NAME_UPPER%": bname.upper()
            }
    
            self.write_project_file(hlocs, dest_file, tpl_file)
  • Next stop: adding a class from the command line 😎. I should be able to add one with the command:
    nvp cmake add class nervland Core base/RefObject
  • When building this I realized I would also like to add multi-line strings in my config file to provide per-project custom default class contents, but that is not easy to do in JSON. So I'm thinking I should really add support to process YAML config files too now: let's add that: OK
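  • For reference, multi-line strings are trivial in YAML thanks to block scalars; a hypothetical config entry could look like this (the key name and content are purely illustrative):

```yaml
# '|' preserves newlines literally; '>' would fold them into spaces.
default_class_content: |
  public:
      %CLASS_NAME%();
      virtual ~%CLASS_NAME%();
```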
  • OK, adding classes now supported on the command line with this handling code:
        def add_class_files(self, cproj, mod_name, class_name, ctype, rewrite):
            """Add a new class in a the given module"""
            proj_dir = cproj['root_dir']
            prefix = cproj['prefix']
            mdesc = self.get_module_desc(cproj, mod_name)
            self.check(mdesc is not None, "invalid cmake project module %s", mod_name)
    
            mod_dir = mod_name
            if mdesc.get("type", "library") == "library":
                mod_dir = f"{prefix}{mod_name}"
    
            template_dir = self.get_path(self.ctx.get_root_dir(), "assets", "templates")
    
            dest_file = self.get_path(proj_dir, "modules", mod_dir, "src", f"{class_name}.h")
    
            if rewrite and self.file_exists(dest_file):
                self.remove_file(dest_file)
    
            parent_dir = self.get_parent_folder(dest_file)
            self.make_folder(parent_dir)
    
            bname = self.remove_file_extension(self.get_filename(class_name))
    
            tpl_file = self.get_path(template_dir, "class_header.h.tpl")
    
            content_tpl = '''public:
        %CLASS_NAME%();
        virtual ~%CLASS_NAME%();'''
    
            if ctype is not None:
                ctpl_file = cproj["content_templates"][ctype]
            ctpl_file = self.get_path(proj_dir, "cmake", "templates", ctpl_file)
                content_tpl = self.read_text_file(ctpl_file)
    
            # We just replace the content part in our global template:
            header_tpl = self.read_text_file(tpl_file)
            header_tpl = header_tpl.replace("%CLASS_CONTENT%", content_tpl)
    
            hlocs = {
                "%PROJ_PREFIX_UPPER%": prefix.upper(),
                "%CLASS_NAME_UPPER%": bname.upper(),
                "%BEGIN_NAMESPACE%": f"namespace {prefix} " + "{",
                "%END_NAMESPACE%": "}",
                "%NAMESPACE%": prefix,
                "%CLASS_NAME%": bname,
                "%CLASS_EXPORT%": f"{mod_dir.upper()}_EXPORT",
                "%CLASS_INCLUDE%": f"{class_name}.h"
            }
    
            self.write_project_file_content(hlocs, dest_file, header_tpl)
    
            dest_file = self.get_path(proj_dir, "modules", mod_dir, "src", f"{class_name}.cpp")
            if rewrite and self.file_exists(dest_file):
                self.remove_file(dest_file)
    
            tpl_file = self.get_path(template_dir, "class_impl.cpp.tpl")
            self.write_project_file(hlocs, dest_file, tpl_file)
    
            # generate the compile commands:
            self.build_project(cproj['name'].lower(), None, gen_commands=True)
  • Now it's (finally) time to start restoring some content in my nvCore module.
  • Starting with the SpinLock class:
    nvp cmake add header nervland Core base/SpinLock.h
  • On the conversion from macros to constexpr template functions: https://devblogs.microsoft.com/cppblog/convert-macros-to-constexpr/
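  • The gist of that article: a function-like macro can usually become a constexpr function template. A small illustrative example (not code from the project):

```cpp
// Macro version: no type checking, and the argument is evaluated twice.
// #define SQUARE(x) ((x) * (x))

// constexpr template version: type-checked, single evaluation, and still
// usable in compile-time contexts:
template <typename T> constexpr auto square(T value) -> T {
    return value * value;
}

static_assert(square(4) == 16, "square() is usable at compile time");
```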
  • Next adding the RefPtr template class:
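  • For reference, the general shape of such a class could be sketched as below, assuming an intrusive addRef()/release() interface on the pointee (which is my guess at the RefObject contract, not the actual project code):

```cpp
#include <utility>

namespace nv {

// Minimal intrusive reference-counted pointer sketch: assumes the pointee
// exposes addRef()/release() to manage its own lifetime.
template <typename T> class RefPtr {
  public:
    RefPtr() = default;
    explicit RefPtr(T* ptr) : _ptr(ptr) {
        if (_ptr != nullptr) { _ptr->addRef(); }
    }
    RefPtr(const RefPtr& rhs) : _ptr(rhs._ptr) {
        if (_ptr != nullptr) { _ptr->addRef(); }
    }
    RefPtr(RefPtr&& rhs) noexcept : _ptr(std::exchange(rhs._ptr, nullptr)) {}
    ~RefPtr() {
        if (_ptr != nullptr) { _ptr->release(); }
    }
    // By-value parameter + swap handles both copy and move assignment:
    auto operator=(RefPtr rhs) noexcept -> RefPtr& {
        std::swap(_ptr, rhs._ptr);
        return *this;
    }
    auto operator->() const -> T* { return _ptr; }
    auto operator*() const -> T& { return *_ptr; }
    auto get() const -> T* { return _ptr; }

  private:
    T* _ptr{nullptr};
};

} // namespace nv
```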
  • Then trying to add the Allocator class:
    • ⇒ Need the LogManager class first.
  • And now I'm thinking I should use an external library like spdlog to support logging… building everything from scratch is fun, but I will not get very far if I keep going that way lol.
  • Added a simple builder for spdlog:
    """This module provide the builder for the spdlog library."""
    
    import logging
    
    from nvp.core.build_manager import BuildManager
    from nvp.nvp_builder import NVPBuilder
    
    logger = logging.getLogger(__name__)
    
    
    def register_builder(bman: BuildManager):
        """Register the build function"""
    
        bman.register_builder('spdlog', SpdLogBuilder(bman))
    
    
    class SpdLogBuilder(NVPBuilder):
        """spdlog builder class."""
    
        def build_on_windows(self, build_dir, prefix, _desc):
            """Build method for spdlog on windows"""
    
            flags = ["-S", ".", "-B", "build"]
            self.run_cmake(build_dir, prefix, flags=flags)
            sub_dir = self.get_path(build_dir, "build")
            self.run_ninja(sub_dir)
    
        def build_on_linux(self, build_dir, prefix, desc):
            """Build method for spdlog on linux"""
    
            flags = ["-S", ".", "-B", "build"]
            self.run_cmake(build_dir, prefix, flags=flags)
            sub_dir = self.get_path(build_dir, "build")
            self.run_ninja(sub_dir)
    
  • And this builds without any problem with the command:
    $ nvp build libs spdlog -c clang
  • Checking what the sink for the default logger is now…
    • Documentation page is: https://github.com/gabime/spdlog
    • ⇒ Should simply use:
      void replace_default_logger_example()
      {
          auto new_logger = spdlog::basic_logger_mt("new_default_logger", "logs/new-default-log.txt", true);
          spdlog::set_default_logger(new_logger);
          spdlog::info("new logger log message");
      }
  • Or in fact, I could use a multi-sink logger as default logger:
    void multi_sink_example()
    {
        auto console_sink = std::make_shared<spdlog::sinks::stdout_color_sink_mt>();
        console_sink->set_level(spdlog::level::warn);
        console_sink->set_pattern("[multi_sink_example] [%^%l%$] %v");
    
        auto file_sink = std::make_shared<spdlog::sinks::basic_file_sink_mt>("logs/multisink.txt", true);
        file_sink->set_level(spdlog::level::trace);
    
        spdlog::logger logger("multi_sink", {console_sink, file_sink});
        logger.set_level(spdlog::level::debug);
        logger.warn("this should appear in both console and file");
        logger.info("this message should not appear in the console, only in the file");
    }
  • In fact I was thinking of just using spdlog directly, but now I realize it's better to keep it encapsulated in the LogManager, which means I still need that class anyway.
  • Now replacing my macros for logINFO/logDEBUG/etc… Done
  • But now I realize that my Allocator class also needs the MemoryManager class 😭.
  • So I added the MemoryManager and then a few additional memory classes. And eventually got the minimal NervSeed app to run on windows.
  • But now I'm on Linux, and here I have some additional issues to compile the code… investigating.
  • ⇒ Anyway, after some more trouble I finally got it “sort of” running on linux too (but ending with a core dump due to a double free issue 😥):
    kenshin@rog:~/projects/NervLand/dist$ ls -l
    total 512
    -rw-r--r-- 1 kenshin kenshin 502736 juil. 21 23:26 libnvCore.so
    -rwxr-xr-x 1 kenshin kenshin  17056 juil. 21 23:26 NervSeed
    kenshin@rog:~/projects/NervLand/dist$ ./NervSeed 
    ./NervSeed: error while loading shared libraries: libnvCore.so: cannot open shared object file: No such file or directory
    kenshin@rog:~/projects/NervLand/dist$ LD_LIBRARY_PATH=. ./NervSeed 
    Hello world!
    [2022-07-21 23:27:34.580] [info] Creating LogManager: 42
    free(): double free detected in tcache 2
    Aborted (core dumped)
  • To run the NervSeed executable above I could not simply execute the default script I created for that with:
    $ nvp nvl
  • ⇒ That script would try to call NervSeed.exe and would not set the LD_LIBRARY_PATH correctly, so I need to extend it now to work both on windows and linux.
    • Here is my updated script in the yaml config file:
      scripts:
        nvl:
          help: Execute NervLand
          windows_cmd: ${PROJECT_ROOT_DIR}/dist/NervSeed.exe
          linux_cmd: ${PROJECT_ROOT_DIR}/dist/NervSeed
          cwd: ${PROJECT_ROOT_DIR}/dist
          linux_env:
            LD_LIBRARY_PATH: ${PROJECT_ROOT_DIR}/dist
    • And here is the updated part in the runner component to handle the os specific env config (was already in place for the custom cmd):
              key = f"{self.platform}_env"
              env_dict = desc[key] if key in desc else desc.get('env', None)
      
              if env_dict is not None:
                  env = os.environ.copy()
                  for key, val in env_dict.items():
                      env[key] = self.fill_placeholders(val, hlocs)
      
  • OK, working just fine! And the dump error was easy to fix: I just need to ensure that I destroy the LogManager instance before exiting (and this is expected, because it's a RefObject anyway):
    #include <core_common.h>
    #include <iostream>
    
    auto main(int /*argc*/, char* /*argv*/[]) -> int {
        std::cout << "Hello world!" << std::endl;
        logINFO("Creating LogManager here.");
        nv::LogManager::instance();
    
        nv::LogManager::destroy();
    
        return 0;
    }
  • This is a good start, but there is still a lot to do to build this NervLand project.
  • I guess the next step would be to restore the NervApp class and add support to read JSON or YAML config files.
  • But this post is already way too long, so as usual, let's stop here and continue this implementation in another dev session. See ya ! ✌️