
# Crypto: adding support for ganache-cli

Hi guys! As discussed at the end of my previous article, I want to add support for deploying and using ganache-cli, so that I can test my contracts effectively. This is what we will focus on in this new post.

• ganache-cli (available at: https://github.com/trufflesuite/ganache) is a NodeJS package, so the first thing I need is a mechanism to deploy NodeJS environments, similar to how I deploy my Python environments. So let's implement that.
• Here is the initial format I selected for the configuration entries:
  "nodejs_envs": {
      "ganache_env": {
          "nodejs_version": "16.15.1",
          "packages": ["ganache-cli"]
      }
  },
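• As a quick standalone sketch of how such an entry gets resolved (a hypothetical simplification: the real component also searches the current project and all sub-projects first, here we only keep the final global-config fallback, assuming the config is loaded into a plain dict):

```python
# Minimal sketch of the env-description lookup from the global config:
config = {
    "nodejs_envs": {
        "ganache_env": {
            "nodejs_version": "16.15.1",
            "packages": ["ganache-cli"],
        }
    }
}

def get_env_desc(cfg, env_name):
    """Retrieve the desc for a given nodejs environment."""
    all_envs = cfg.get("nodejs_envs", {})
    desc = all_envs.get(env_name, None)
    assert desc is not None, f"Cannot find nodejs environment with name {env_name}"
    return desc

desc = get_env_desc(config, "ganache_env")
```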
• Next I need a NodeJsManager component:
"""NodeJs manager component"""
import logging

from nvp.nvp_component import NVPComponent
from nvp.nvp_context import NVPContext

logger = logging.getLogger(__name__)


def create_component(ctx: NVPContext):
    """Create an instance of the component"""
    return NodeJsManager(ctx)


class NodeJsManager(NVPComponent):
    """NodeJsManager component"""

    def __init__(self, ctx: NVPContext):
        """Component constructor"""
        NVPComponent.__init__(self, ctx)

    def get_env_desc(self, env_name):
        """Retrieve the desc for a given environment"""
        # If there is a current project we first search in that one:
        proj = self.ctx.get_current_project()
        desc = None
        if proj is not None:
            desc = proj.get_nodejs_env(env_name)

        if desc is None:
            # Then search in all projects:
            projs = self.ctx.get_projects()
            for proj in projs:
                desc = proj.get_nodejs_env(env_name)
                if desc is not None:
                    break

        if desc is None:
            # Finally fall back to the global config:
            all_envs = self.config.get("nodejs_envs", {})
            desc = all_envs.get(env_name, None)

        assert desc is not None, f"Cannot find nodejs environment with name {env_name}"
        return desc

    def get_env_dir(self, env_name, desc=None):
        """Retrieve the installation dir for a given nodejs env."""
        if desc is None:
            desc = self.get_env_desc(env_name)

        default_env_dir = self.get_path(self.ctx.get_root_dir(), ".nodeenvs")
        return desc.get("install_dir", default_env_dir)

    def setup_nodejs_env(self, env_name, env_dir=None, renew_env=False, do_update=False):
        """Setup a given nodejs environment"""

        desc = self.get_env_desc(env_name)

        if env_dir is None:
            # Try to use the install dir from the desc if any, or use the default install dir:
            env_dir = self.get_env_dir(env_name, desc)

        # Ensure the parent folder exists:
        self.make_folder(env_dir)

        # Create the env folder if it doesn't exist yet:
        dest_folder = self.get_path(env_dir, env_name)

        tools = self.get_component("tools")
        new_env = False

        if self.dir_exists(dest_folder) and renew_env:
            logger.info("Removing previous nodejs environment at %s", dest_folder)
            self.remove_folder(dest_folder)

        if not self.dir_exists(dest_folder):
            # Should extract the nodejs package first:
            vers = desc['nodejs_version']
            ext = ".7z" if self.is_windows else ".tar.xz"
            suffix = "win-x64" if self.is_windows else "linux-x64"
            base_name = f"node-v{vers}-{suffix}"
            filename = f"{base_name}{ext}"

            pkg_dir = self.get_path(self.ctx.get_root_dir(), "tools", self.platform)
            pkg_file = self.get_path(pkg_dir, filename)
            if not self.file_exists(pkg_file):
                url = f"https://nodejs.org/dist/v{vers}/(unknown)"
                # (the download of that package file is elided in this excerpt)

            logger.info("Installing nodejs version %s...", vers)
            tools.extract_package(pkg_file, env_dir, target_dir=dest_folder, extracted_dir=base_name)
            new_env = True

        # py_path = self.get_path(dest_folder, pdesc['sub_path'])

        # if new_env or do_update:
        #     # trigger the update of pip:
        #     logger.info("Updating pip...")
        #     self.execute([py_path, "-m", "pip", "install", "--upgrade", "pip"])

        # # Next we should prepare the requirements file:
        # req_file = self.get_path(dest_folder, "requirements.txt")
        # content = "\n".join(desc["packages"])
        # self.write_text_file(content, req_file)

        # logger.info("Installing python requirements...")
        # self.execute([py_path, "-m", "pip", "install", "-r", req_file])

    def process_cmd_path(self, cmd):
        """Process a given command path"""

        if cmd == "setup":
            env_name = self.get_param("env_name")
            logger.info("Should setup environment %s here.", env_name)
            env_dir = self.get_param("env_dir")
            renew_env = self.get_param("renew_env", False)
            do_update = self.get_param("do_update", False)
            self.setup_nodejs_env(env_name, env_dir, renew_env, do_update)
            return True

        if cmd == "remove":
            env_name = self.get_param("env_name")
            logger.info("Should remove environment %s here.", env_name)
            return True

        return False

if __name__ == "__main__":
    # Create the context:
    context = NVPContext()

    # Add our component:
    comp = context.register_component("nodejs", NodeJsManager(context))

    context.define_subparsers("main", ["setup", "remove"])

    psr = context.get_parser('main.setup')
    psr.add_argument("env_name", type=str,
                     help="Name of the environment to setup")
    psr = context.get_parser('main.remove')
    psr.add_argument("env_name", type=str,
                     help="Name of the environment to remove")

    comp.run()
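• To replace the commented-out pip logic above, the npm package-install step could look like this (a sketch: `build_npm_install_cmd` is a hypothetical helper, and the node/npm-cli.js locations are assumptions based on the layout of the official nodejs archives):

```python
import os

def build_npm_install_cmd(env_dir, env_name, packages, is_windows=True):
    """Build the command list installing packages into a given nodejs env."""
    root = os.path.join(env_dir, env_name)
    # The node binary location differs between the windows and linux archives:
    node = os.path.join(root, "node.exe" if is_windows else os.path.join("bin", "node"))
    # Drive npm through its cli entry script with our own node binary:
    npm_cli = os.path.join(root, "node_modules", "npm", "bin", "npm-cli.js")
    return [node, npm_cli, "install", "--prefix", root] + list(packages)

cmd = build_npm_install_cmd("nodeenvs", "ganache_env", ["ganache"])
# The resulting list can then be passed to the component's execute() helper.
```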

• One complication on this path: I'm now making that NodeJsManager component fully dynamic, so it only gets the default NVP context config details and thus does not load the nvp_config.json file containing the nodejs env details…
• ⇒ My first idea to solve the issue above was to also load the sub projects even when not in the main NVPContext, and this works:
$ nvp nodejs setup ganache_env
2022/06/12 07:01:12 [nvp.nvp_project] ERROR: Cannot load project NervHome: exception: No module named 'PIL'
2022/06/12 07:01:12 [__main__] INFO: Should setup environment ganache_env here.
2022/06/12 07:01:12 [__main__] INFO: Downloading nodejs version 16.15.1 for windows...
2022/06/12 07:01:12 [nvp.components.tools] INFO: Downloading file from https://nodejs.org/dist/v16.15.1/node-v16.15.1-win-x64.7z... [==================================================] 17121974/17121974 100.000%
2022/06/12 07:01:34 [__main__] INFO: Installing nodejs version 16.15.1...
2022/06/12 07:01:34 [nvp.components.tools] INFO: Extracting D:\Projects\NervProj\tools\windows\node-v16.15.1-win-x64.7z...
• But as reported above, the “NervHome” project cannot be loaded in the target python env because of missing packages ('PIL' being the first one here), so this is a bit annoying… And I'm wondering if this is really the best option I have 🤔?
• ⇒ In fact the exception comes from the loading of the nvp_plug.py file, which itself registers a bunch of non-dynamic components in the context:
"""NVP plug entrypoint module for NervHome"""
import logging

from components.navision import Navision
from components.backup_manager import BackupManager
from components.gif_resizer import GifResizer
from components.file_renamer import FileRenamer
from components.movie_handler import MovieHandler
from components.file_dedup import FileDedup
from components.picture_handler import PictureHandler
from components.password_generator import PasswordGenerator
from components.text_translator import TextTranslator

logger = logging.getLogger('NervHome')


def register_nvp_plugin(context, proj):
    """This function should register this plugin in the current NVP context"""
    logger.debug("Registering NervHome NVP plugin.")

    # Note that we register these as context components so that they are not only
    # available on the nervhome project:
    context.register_component('navision', Navision(context, proj))
    context.register_component('backup', BackupManager(context, proj))
    context.register_component('gif_resizer', GifResizer(context, proj))
    context.register_component('file_renamer', FileRenamer(context, proj))
    context.register_component('movie_handler', MovieHandler(context, proj))
    context.register_component('file_dedup', FileDedup(context, proj))
    context.register_component('picture_handler', PictureHandler(context, proj))
    context.register_component('password_gen', PasswordGenerator(context, proj))
    context.register_component('text_translator', TextTranslator(context, proj))
• But that behavior is really obsolete now: I should rather enforce using scripts and load those components dynamically only when I really need them. So for the moment, I think I should just disable that nvp_plug system when constructing non-primary NVPContexts 👍:
    # Note: the nvp_plug system below is obsolete and should be removed eventually:
    if ctx.is_master_context() and self.file_exists(proj_path, "nvp_plug.py"):
        # logger.info("Loading NVP plugin from %s...", proj_name)
        try:
            sys.path.insert(0, proj_path)
            plug_module = import_module("nvp_plug")
            plug_module.register_nvp_plugin(ctx, self)
            sys.path.pop(0)

            # Remove the module name from the list of loaded modules:
            del sys.modules["nvp_plug"]
        except ModuleNotFoundError as err:
            logger.error("Cannot load project %s: exception: %s", self.get_name(False), str(err))
• OK, so now I can install a nodejs env with ganache in it:
$ nvp nodejs setup ganache_env
2022/06/12 11:21:53 [__main__] INFO: Installing nodejs version 16.15.1...
2022/06/12 11:21:53 [nvp.components.tools] INFO: Extracting D:\Projects\NervProj\tools\windows\node-v16.15.1-win-x64.7z...
npm WARN config global --global, --local are deprecated. Use --location=global instead.

changed 20 packages, and audited 203 packages in 7s

11 packages are looking for funding
run npm fund for details

found 0 vulnerabilities
2022/06/12 11:22:10 [__main__] INFO: Installing packages: ['ganache']

changed 1 package, and audited 203 packages in 2s

11 packages are looking for funding
run npm fund for details

found 0 vulnerabilities
• Next I have to prepare a script to actually run ganache:
    "ganache": {
        "nodejs_env": "ganache_env",
        "cmd": "${NODE}${NODE_ENV_DIR}/node_modules/ganache/dist/node/cli.js"
    }
• And I'm injectnig the required placeholders from the runner component:
        if "nodejs_env" in desc:
nodejs = self.get_component("nodejs")
env_name = desc['nodejs_env']
env_dir = nodejs.get_env_dir(env_name)
node_root_dir = self.get_path(env_dir, env_name)
hlocs["${NODE_ENV_DIR}"] = node_root_dir node_path = nodejs.get_node_path(env_name) hlocs["${NODE}"] = node_path
hlocs["${NPM}"] = f"{node_path} {node_root_dir}/node_modules/npm/bin/npm-cli.js" • And this works just fine already 🥳: $ nvp ganache
ganache v7.3.0 (@ganache/cli: 0.4.0, @ganache/core: 0.4.0)
Starting RPC server

Available Accounts
==================
(0) 0x4e888652c47b053fb5F72FA96E4036aB15c7cC93 (1000 ETH)
(1) 0x3E692CfBb5C1cf7B3997242C9eBE8C7Bd54880Be (1000 ETH)
(2) 0x89BfF5009A50cB154AAF4589BfbD8Ac6618f7E13 (1000 ETH)
(3) 0xb2913BbA1B4Bf57a68e7ddCD84022Cb67CBdee38 (1000 ETH)
(4) 0x7Db390db41fAAaa5483D0191b67d2e4EbB246e51 (1000 ETH)
(5) 0x070c06261E135dB3286419980f69CE9ED5b2Ee5E (1000 ETH)
(6) 0xab22A767342B93843D07B326A81C2461a5E33F9a (1000 ETH)
(7) 0xa5d98A88cBbFe0C947F00Cc0c8Ac14cFD2fAc5aA (1000 ETH)
(8) 0xc9F4b0e96229F11b9e2d60184Eb1261E411C91b5 (1000 ETH)
(9) 0x799B6F260Fc268843c99F582c43112b607B51865 (1000 ETH)

Private Keys
==================
(0) 0xfc9b50e51883c70185e9e3ab7f64c3d4f324fec136d480993ec03aeb041e9962
(1) 0x73e0a314c731c6b2ef0fac41bcc6957f715380b0d3a16bc48256c8e72ba341a7
(2) 0xe71659570a882dbb11952bda508e760a6c9a5720215c8acd8d514b3dd00a2b79
(3) 0xebe487198293d177879f6f67e47248e7abe0ca2cbcba5f7ddd9ce261ae7c73f3
(4) 0x5239f211a7aca12a63046ae9501f1dd2709226a1cc6bd5ec6258808383ac7065
(5) 0x1eea2b212bdc8541f72e886524dd771df29309548e11c9435d37abb719227f2f
(6) 0xfbcfffa982129d70b110a4d48234e1f95e733ea98b079244c0089b330bbd7fb6
(7) 0xa3141529424d2a9503f2b2da04b399d42793f459114e307a2e84b6588e971025
(8) 0x5bd642a37f0e1de30afc67a619f0541e0fcfa7541fca93655d094283a04e451f
(9) 0x5bcf805bcd57418327a22a35294750c99093ae837d627989fca21168b2ba3fe1

HD Wallet
==================
Mnemonic:      crew deposit army oxygen elevator common planet scene tribe sentence decrease nature
Base HD Path:  m/44'/60'/0'/0/{account_index}

Default Gas Price
==================
2000000000

BlockGas Limit
==================
30000000

Call Gas Limit
==================
50000000

Chain Id
==================
1337

RPC Listening on 127.0.0.1:8545
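• The placeholder mechanism used in the script command above can be sketched as a simple substitution pass over the command string (a hypothetical standalone version of what the runner component does with its `hlocs` map; the paths are made up for illustration):

```python
def inject_placeholders(cmd, hlocs):
    """Replace every ${...} placeholder of hlocs found in the command string."""
    for key, val in hlocs.items():
        cmd = cmd.replace(key, val)
    return cmd

# Example values; note the trailing space after the node binary so that the
# script path that follows becomes its first argument:
hlocs = {
    "${NODE}": "nodeenvs/ganache_env/node.exe ",
    "${NODE_ENV_DIR}": "nodeenvs/ganache_env",
}
cmd = inject_placeholders(
    "${NODE}${NODE_ENV_DIR}/node_modules/ganache/dist/node/cli.js", hlocs
)
```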
• The nice thing about ganache is that I should now be able to easily test contracts as if they were deployed on the BSC mainnet: as far as I understand, all I need for that is the fork feature, used like this:
ganache -f https://bsc-dataseed1.binance.org
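• In practice that just means appending the fork arguments to the ganache command; a tiny helper to build that argument list (hypothetical: only the -f flag itself comes from the command above):

```python
def build_fork_args(fork_url):
    """Arguments making ganache fork an existing chain at its latest block."""
    return ["-f", fork_url]

args = build_fork_args("https://bsc-dataseed1.binance.org")
```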
• OK, next I should run the pair check on BSC but using the ganache provider url “http://127.0.0.1:8545”, so something like this:
nvp bchain check-pair -p "http://127.0.0.1:8545" WBNB/BUSD
• Ohh damn it… it's working for WBNB/BUSD:
$ nvp bchain check-pair -p "http://127.0.0.1:8545" WBNB/BUSD
2022/06/12 16:18:44 [__main__] INFO: Using provider url http://127.0.0.1:8545 for chain bsc
2022/06/12 16:18:47 [nvh.crypto.blockchain.arbitrage_manager] INFO: PairChecker result for BUSD (against WBNB) is: PASSED
• But it's also working for the WBNB/GENIX pair 😢:
$ nvp bchain check-pair -p "http://127.0.0.1:8545" WBNB/GENIX
2022/06/12 16:19:13 [__main__] INFO: Using provider url http://127.0.0.1:8545 for chain bsc
2022/06/12 16:19:15 [nvh.crypto.blockchain.arbitrage_manager] INFO: PairChecker result for GENIX (against WBNB) is: PASSED
• while this is definitely not working when using a direct connection to the mainnet… arrfff… stupid me… it's “GINUX”, not “GENIX”, lol. So let's try again. Cool! Now this is failing as expected 👍:
$ nvp bchain check-pair -p "http://127.0.0.1:8545" WBNB/GINUX
2022/06/12 16:24:51 [__main__] INFO: Using provider url http://127.0.0.1:8545 for chain bsc
2022/06/12 16:24:57 [nvh.crypto.blockchain.arbitrage_manager] ERROR: Pair checker test failed for GINUX: execution reverted: VM Exception while processing transaction: revert transferToken failed.
2022/06/12 16:24:57 [nvh.crypto.blockchain.arbitrage_manager] INFO: PairChecker result for GINUX (against WBNB) is: FAILED
2022/06/12 16:24:57 [nvh.crypto.blockchain.arbitrage_manager] INFO: Pair address: 0x85B446d3EDC3A7fe4db8A88649c14fdcB4e911dE, exchange: PancakeSwap2, router: 0x10ED43C718714eb63d5aA57B78B54704E256024E, ifp: 9975
• Okay okay… So what do we need next? I need to deploy a new version of the pair checker contract on the ganache blockchain, but using the default account I have for arbitrage.
• I thus added support to generate accounts/private keys from mnemonics:
    def get_mnemonic_accounts(self, mnemonics, account_index=0, count=1):
        """Generate accounts from a mnemonic"""
        w3 = Web3()
        w3.eth.account.enable_unaudited_hdwallet_features()
        res = []
        for i in range(count):
            account = w3.eth.account.from_mnemonic(mnemonics, account_path=f"m/44'/60'/0'/0/{account_index+i}")
            res.append({"address": account.address, "private_key": account.key.hex()})
        return res
• And using that function I can indeed retrieve the accounts created in a ganache session from a given list of mnemonic words.
• And with that I could send some funds from the default ganache account 0 to my “evm_arb” account using this kind of code:
    # Use the first account to send some ethers:
    chain.set_account_address(accounts[0]['address'])
    chain.set_private_key(accounts[0]['private_key'])

    # Display the native balance:
    bal = chain.get_native_balance()
    print(f"Initial account0 balance: {bal} {chain.get_native_symbol()}")

    chain.transfer(999.0, "evm_arb")
    bal = chain.get_native_balance()
    print(f"Final account0 balance: {bal} {chain.get_native_symbol()}")

    chain.set_account("evm_arb")
    bal = chain.get_native_balance()
    print(f"evm_arb balance: {bal} {chain.get_native_symbol()}")
• And now this is getting interesting: I was then able to deploy my PairCheckerV5 contract and assign it as the pair checker contract in the ArbitrageManager (all this running in jupyter):
    addr = chain.deploy_contract("PairCheckerV5")
    arbman.pair_checker_sc = chain.get_contract(addr, abi_file="ABI/PairCheckerV5.json")
• Then I send some WBNB to that contract:
    # Send some BNB on the WBNB token:
    bal = t0.get_balance()
    print(f"Stage1 WBNB balance: {bal}")
    chain.transfer(0.2, t0.address())
    bal = t0.get_balance()
    print(f"Stage2 WBNB balance: {bal}")

    # Now we transfer the WBNB tokens to the pair checker address:
    bal = t0.get_balance(account=addr)
    print(f"Stage3 pair checker initial WBNB balance: {bal}")
    t0.transfer(addr, t0.to_amount(0.2))
    bal = t0.get_balance(account=addr)
    print(f"Stage4 pair checker final WBNB balance: {bal}")
• And now I get the following new error message:
$ arbman.check_pair(pair, t0)
2022/06/12 20:58:37 [nvh.crypto.blockchain.arbitrage_manager] ERROR: Pair checker test failed for GINUX: execution reverted: VM Exception while processing transaction: revert Pancake: INSUFFICIENT_LIQUIDITY
2022/06/12 20:58:37 [nvh.crypto.blockchain.arbitrage_manager] INFO: PairChecker result for GINUX (against WBNB) is: FAILED
2022/06/12 20:58:37 [nvh.crypto.blockchain.arbitrage_manager] INFO: Pair address: 0x85B446d3EDC3A7fe4db8A88649c14fdcB4e911dE, exchange: PancakeSwap2, router: 0x10ED43C718714eb63d5aA57B78B54704E256024E, ifp: 9975
• So, “INSUFFICIENT_LIQUIDITY” is what I should be looking for next!
• In the process I also started to use https://account.getblock.io/ as a provider url: this was necessary to avoid the continuous “missing trie node” errors in ganache, occurring because some archive state was not available anymore on the base node. Update: well, I spoke too quickly, I still eventually get that error with getblock.io too, so no luck.
• Hmmm… so, checking the code for the pair WBNB/GINUX at 0x85B446d3EDC3A7fe4db8A88649c14fdcB4e911dE, the only location where we have that message is at the beginning of the swap function:
    function swap(
        uint256 amount0Out,
        uint256 amount1Out,
        bytes calldata data
    ) external lock {
        require(
            amount0Out > 0 || amount1Out > 0,
            "Pancake: INSUFFICIENT_OUTPUT_AMOUNT"
        );
        (uint112 _reserve0, uint112 _reserve1, ) = getReserves(); // gas savings
        require(
            amount0Out < _reserve0 && amount1Out < _reserve1,
            "Pancake: INSUFFICIENT_LIQUIDITY"
        );
• So what could that mean? Am I really requesting to swap larger amounts than what we have in the reserves?
• Well the reserves are very large at least:
$ pobj = chain.get_pair(addr="0x85B446d3EDC3A7fe4db8A88649c14fdcB4e911dE")
pobj.get_reserves()
(107867324822485882037, 79318074672041400795394145201)
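• As a sanity check on those reserves, we can compute what a Uniswap-V2 style pair should return for a given input amount (a sketch of the standard constant-product formula, assuming the ifp=9975 fee factor reported in the logs, i.e. a 0.25% dex fee; this is not my contract code):

```python
def get_amount_out(amount_in, reserve_in, reserve_out, ifp=9975):
    """Standard constant-product output amount, with fee factor ifp/10000."""
    amount_in_with_fee = amount_in * ifp
    numerator = amount_in_with_fee * reserve_out
    denominator = reserve_in * 10000 + amount_in_with_fee
    return numerator // denominator

# Reserves read from the WBNB/GINUX pair above:
r0, r1 = 107867324822485882037, 79318074672041400795394145201
out = get_amount_out(10**17, r0, r1)  # theoretical GINUX out for 0.1 WBNB in
```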
• So could it be that my solidity code is messed up then? I need to have a look.
• Note: a nice thing I just discovered in solidity is how to send arbitrary uint256 values in your revert messages, which is very handy for debugging:
• First you need to import the string library from “@openzeppelin/contracts/utils/Strings.sol”
• And then all you really need is the following kind of code:
        revert(
            string(
                abi.encodePacked(
                    "r0=",
                    Strings.toString(r0),
                    ", r1=",
                    Strings.toString(r1)
                )
            )
        );
• And you should then get this kind of revert message:
VM Exception while processing transaction: revert r0=125324017036209626383, r1=81185631560566122939989913391
• I'm just not quite sure yet about the max length of the string you can write… we'll see eventually I guess.
• Going on with my investigation, I eventually reached the point where I swap the source token for the dest token on the pair, and at that point I got this error:
2022/06/13 07:43:26 [nvh.crypto.blockchain.arbitrage_manager] ERROR: Pair checker test failed for GINUX: execution reverted: VM Exception while processing transaction: revert [error] fn=swapTokens0NFL
2022/06/13 07:43:26 [nvh.crypto.blockchain.arbitrage_manager] INFO: PairChecker result for GINUX (against WBNB) is: FAILED
2022/06/13 07:43:26 [nvh.crypto.blockchain.arbitrage_manager] INFO: Pair address: 0x85B446d3EDC3A7fe4db8A88649c14fdcB4e911dE, exchange: PancakeSwap2, router: 0x10ED43C718714eb63d5aA57B78B54704E256024E, ifp: 9975
• ⇒ This is unexpected because the source dex is PancakeSwap2, so we do have flashloan support there; and indeed, forcing the alternate stype value in the following code removed that error:
        // If stype==0 it means we have FL support:
        if (stype == 0) {
            swapTokens0FL(
                pair,
                swapAmount0,
                swapAmount1,
                new bytes(0)
            );
        } else {
            swapTokens0NFL(
                pair,
                swapAmount0,
                swapAmount1,
                new bytes(0)
            );
        }
• So, this means I'm not sending the correct stype value when testing the pair ? Shit I'm sending the opposite 😅:
stype = 1 if dex0.is_flash_loan_supported() else 0
• Checking the assembly version… well, there we were apparently using the correct order already:
            if eq(stype, 0) {
                mstore(ptr, shl(224, 0x6d9a640a)) // for sig "swap(uint256,uint256,address)"
            }
            if eq(stype, 1) {
                mstore(ptr, shl(224, 0x022c0d9f)) // for sig "swap(uint256,uint256,address,bytes)"
            }
• ⇒ So we should keep stype==0 for NFL and stype==1 for FL
• Okay, now for the next point on the list: after the first swap from srcToken to destToken, the updated reserves do not always match the default expectation:
• I would expect r0b = r0 + amountIn and r1b = r1 - amountOut, but it seems that some tokens may behave differently for the dstToken reserve; for instance for GINUX we find r1b - (r1 - amountOut) == 11214770857 when we use an amountOut value of 63749319922005. ⇒ Interesting 🤔 But how could that happen in the code? I need to find a clear explanation…
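• To make that kind of mismatch easy to spot, the deviation from the naive expectation can be computed explicitly (a sketch with synthetic numbers, not the actual GINUX reserves):

```python
def reserve_deltas(r0, r1, r0b, r1b, amount_in, amount_out):
    """Deviation of post-swap reserves (r0b, r1b) from the naive expectation
    r0b == r0 + amount_in and r1b == r1 - amount_out."""
    return r0b - (r0 + amount_in), r1b - (r1 - amount_out)

# Synthetic example: a fee-on-transfer token keeps part of amount_out in the
# pair, so r1b ends up above the naive expectation:
r0dt, r1dt = reserve_deltas(1000, 5000, 1100, 4910, 100, 100)
```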
• Yes! I think I get it now: it all happens in the “safe transfer” part before we do the actual swapping (inside the swap() function):
            if (amount0Out > 0) _safeTransfer(_token0, to, amount0Out); // optimistically transfer tokens
            if (amount1Out > 0) _safeTransfer(_token1, to, amount1Out); // optimistically transfer tokens
• ⇒ This part will call the underlying transfer(address to, uint256 value) method from each token, and this is where funny things may happen.
• Checking the GINUX contract code, and indeed, in there we have the function swapAndLiquify called during a transfer:
    function swapAndLiquify(uint256 contractTokenBalance) private lockTheSwap {
        // split the contract balance into halves
        uint256 half = contractTokenBalance.div(2);
        uint256 otherHalf = contractTokenBalance.sub(half);

        // capture the contract's current ETH balance.
        // this is so that we can capture exactly the amount of ETH that the
        // swap creates, and not make the liquidity event include any ETH that
        // has been manually sent to the contract
        uint256 initialBalance = address(this).balance;

        // swap tokens for ETH
        swapTokensForEth(half); // <- this breaks the ETH -> HATE swap when swap+liquify is triggered

        // how much ETH did we just swap into?
        uint256 newBalance = address(this).balance.sub(initialBalance);

        // add liquidity to uniswap

        emit SwapAndLiquify(half, newBalance, otherHalf);
    }
• ⇒ That's most definitely where the updated reserves come from!
• This means that, when doing a swap, we can never be really sure about the reserves available after the swap operation. I should definitely keep that in mind (and this was naturally not really taken care of in the PairChecker contract so far).
• And in fact, I'm now realizing that something funny may also happen simply when I transfer the dstToken I received from the first swap back to the pair contract to swap it back to srcToken!
• And in fact, even during the first swap operation itself, there is no guarantee that our account will receive the amountOut of dstToken that we requested! We may receive far less than that 😳, so I need to check that part too. And indeed, in the case of GINUX, after our first swap we get:
expected=63955597073879
got=61397373190925
(expected-got)/expected

0.03999999999998186
• ⇒ So this is an extra fee of 4%! We could stop right here for that token, but let's ignore this issue and continue with the updated implementation of PairCheckerV5.
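• That 4% figure can be reproduced directly from the numbers above:

```python
expected = 63955597073879  # amountOut requested in the first swap
got = 61397373190925       # amount actually received
fee = (expected - got) / expected  # fraction kept by the token contract
```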
• Okay, so now I have an updated PairCheckerV5 contract that reports, at the end, a revert message with details on the swap operations, as follows:
2022/06/13 12:37:23 [nvh.crypto.blockchain.arbitrage_manager] ERROR: Pair checker test failed for GINUX: execution reverted: VM Exception while processing transaction: revert [ok] r0=125574043306384440312, r1=81028894613374954895881896594, r0dt=0, r1dt=-2563240925341, swap_exp=64365469366655, swap_rcv=61790850591989, transfer_exp=61790850591989, transfer_rcv=59330139303662, amount_back=91716
• With those infos, I can check that:
• The reserves after the swap have no deltas compared to the expectations, i.e. r0dt==0 and r1dt==0 (not the case for the GINUX token above),
• The swap expected/received values are the same: swap_exp==swap_rcv,
• The transfer expected/received values are the same: transfer_exp==transfer_rcv,
• The amount_back should only have lost the dex fees (applied twice).
• ⇒ So let's update the contract on the bsc mainnet now:
nvp bchain deploy -c bsc -a evm_arb PairCheckerV5
• OK, but then I have to send some funds to that new contract:
        for addr in self.quote_tokens:
            val = dex.get_quote(1.0, addr, self.native_symbol)
            # (retrieval of the token object from addr is elided in this excerpt)

            logger.info("Quote token %s value: %.6g %s", token.symbol(), val, self.native_symbol)

            # Get the current balance of qtoken in the pair checker:
            bal = token.get_balance(as_value=False, account=self.pair_checker_address)

            if bal == 0:
                if token.get_balance(as_value=False) > 400000:
                    logger.info("Sending %s funds to pair checker contract...", token.symbol())
                    # (the actual transfer call is elided in this excerpt)
                else:
                    logger.warning("Not enough %s funds to send to pair checker contract.", token.symbol())

            bal = token.get_balance(as_value=False, account=self.pair_checker_address)
            logger.info("PairChecker balance: %d %s", bal, token.symbol())
• And of course, after that I cannot read the revert message properly… 🤬 What the fuck…
• I tried to reduce the output message length, tried to get back to solidity v0.6.6, and still no revert message from mainnet. That is such a pain.
• But never mind, I will try something else now: I should also specify the min amount I expect to get back and only revert when this is not matched; from that I could build an estimation of the fees being applied.
• Or maybe I need a function… inside my main function? 🤔 Let's try that. ⇒ Bingo! With that, it seems I can get my revert message as desired 😎!
• So for now I just enforce the failure from the “min_out” value and proceed as initially planned with the revert message in all cases.
• ⇒ Oh, and by the way, I updated the contract to version 6 in the process (since I had some confusion around the ABI file at some point 😅).
• To be able to study the arb setup duration I created a new database table:
SQL_CREATE_ARB_SETUPS_TABLE = """
CREATE TABLE IF NOT EXISTS arb_setups (
    id SERIAL PRIMARY KEY,
    timestamp integer NOT NULL,
    block_number integer NOT NULL,
    p0addr char(42) NOT NULL,
    p1addr char(42) NOT NULL,
    duration SMALLINT NOT NULL,
    profit_value float NOT NULL,
    state SMALLINT NOT NULL
);
"""
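• For reference, a matching parameterized insert for that table could look like this (a sketch: `build_insert_params` and the row-dict keys are hypothetical, not the actual DB wrapper code):

```python
SQL_INSERT_ARB_SETUP = (
    "INSERT INTO arb_setups (timestamp, block_number, p0addr, p1addr, "
    "duration, profit_value, state) VALUES (%s, %s, %s, %s, %s, %s, %s)"
)

def build_insert_params(arb):
    """Build the parameter tuple for one arb setup row."""
    return (
        arb["timestamp"],
        arb["block_number"],
        arb["p0addr"],
        arb["p1addr"],
        arb["duration"],
        arb["profit_value"],
        arb["state"],
    )

# Illustrative row (made-up values):
params = build_insert_params({
    "timestamp": 1655100000,
    "block_number": 18500000,
    "p0addr": "0x" + "0" * 40,
    "p1addr": "0x" + "1" * 40,
    "duration": 1,
    "profit_value": 0.01,
    "state": 0,
})
```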
• And now I have a dedicated function in the ArbitrageManager to check the state of the existing arb setups and insert them in the DB as needed:
    def check_current_setups(self):
        """Check the currently existing arb setups"""
        if len(self.current_setups) == 0:
            return

        # Get all the reserves of interest:
        # (collection of the pair addresses 'paddrs' from the current setups
        # is elided in this excerpt)
        pair_reserves, _ = self.get_all_reserves(paddrs)

        for arb in self.current_setups:
            qtoken = arb["qtoken"]
            am0, pval = self.compute_arb_profit(arb["p0"], arb["p1"], pair_reserves, qtoken)
            if am0 is not None:
                best_profit = qtoken.to_value(pval)

                # Convert the qtoken profit value into native (wrapped) token value:
                # (conversion code elided in this excerpt)

            if am0 is None or best_profit < self.min_profit:
                diff = self.last_block_number - arb["block_number"]
                sym0 = arb["t0"].symbol()
                sym1 = arb["t1"].symbol()
                logger.info("Arb setup on %s/%s is gone after %d blocks.", sym0, sym1, diff)
                arb["duration"] = diff
                arb["done"] = True

                # Add the setup in the db:
                self.chain.get_db().insert_arb_setups([arb])

        self.current_setups = [arb for arb in self.current_setups if "done" not in arb]
• And the first results seem to indicate that arb setups are usually gone on the next block after they appear.
• After some additional time I could collect more data, suggesting that there might still be some room to place some arbitrages:
import matplotlib.pyplot as plt

rows = db.get_all_arb_setups()
fig = plt.figure(figsize=(8, 4))
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)

profits = [row[6] for row in rows]
dur = [row[5] for row in rows]
num = len(profits)
print(f"Collected {num} arb setups.")

ax1.plot(profits)
ax1.title.set_text("Profits")

ax2.plot(dur)
ax2.title.set_text("Duration")
plt.show()

• A duration of “-1” in the graph above is the default value I'm writing when I'm not in “dry-run” mode and actually trying the arbitrage: in that case I will never know when the arb setup “naturally vanishes”, so I'm reporting a negative value to be able to separate those entries if I have to at some point.
• And so, I've been trying to rush it today to get the actual call to my FlashArb contract back on rails and make some “easy money”… But of course, that doesn't work at all 😭: in every single case, I get a message back (a log event) from my contract indicating that the reserves I was considering have changed to the point where the arb setup is not valid anymore. This is so disheartening… 😢 I really had hope it would work at least partially.
• But at the same time, I'm wondering if it could be due to my flash arb contract itself: maybe I'm doing something wrong in there? ⇒ So maybe I should try to build a simpler version just checking the reserves?
• Hmmmm… Now I just reverted to my FlashArb version 4 address, and surprisingly the first result I get after a few seconds is a successful arbitrage transaction! 🤪 So maybe there is still hope here?
• So first I should clearly identify which version of the contract I'm interacting with here, so I'm checking the runtime bytecode for that… but of course I don't seem to have a precise match with anything.
• Arrffff… And now that V4 contract is also giving me failed arb setups 😭 Definitely not my day/week/month/year/decade… you name it lol.
• But now I'm starting to think about something else: what if I only considered the pairs that have an arb setup according to the last block on the chain, but are not part of the pending transactions yet? ⇒ That might help reduce the situations where the reserves just got updated, right? 🤔 So I'm trying that.
• Ahhh… I actually noticed that all my potential arb pairs were discarded, but then, checking the block numbers, it seems we have the same number for both blocks! So the pending block returned is actually just the last block?!
• And indeed just checking the first transaction hash in both blocks we read the same thing:
2022/06/14 21:19:13 [__main__] INFO: hash1=0x8118f7b7a2ca099a05835cc54bbfd319daa1f46f9ae9ad6e5fda4ee00976a4c7
2022/06/14 21:19:13 [__main__] INFO: hash2=0x8118f7b7a2ca099a05835cc54bbfd319daa1f46f9ae9ad6e5fda4ee00976a4c7
2022/06/14 21:19:17 [__main__] INFO: hash1=0xdadd21f238aefe9564efeb1d29871a0bf02fed8135389833f7ba75d8d1fcc2ce
2022/06/14 21:19:17 [__main__] INFO: hash2=0xdadd21f238aefe9564efeb1d29871a0bf02fed8135389833f7ba75d8d1fcc2ce
• ⇒ So the idea I had will not work here. 😒
• And now I'm really starting to feel desperate about all this arbitrage stuff on BSC… The problem is, you have to compete continuously with other entities with far better hardware, so you are always late compared to them in getting the latest data, and by the time you can send a transaction, they have already collected the arbitrage funds.
• So I'm not quite sure what I should do next: maybe I should give it a break? Or maybe I should simply give up…? Problem is, if I give up, I have no plan B so far: no other idea or significant project to push forward to become financially independent, so I'm not quite sure either that failing is an option here 🤣. Arrff, maybe I just need some rest…
• Update: when monitoring the timestamp of the latest blocks I receive compared to my current time, I'm now getting results that are very low and even negative sometimes, so maybe I should keep my fingers crossed 🤞:
2022/06/15 20:14:13 [__main__] INFO: Block age: 0.373568
2022/06/15 20:14:17 [__main__] INFO: Block age: -1.171809
2022/06/15 20:14:22 [__main__] INFO: Block age: 0.276598
2022/06/15 20:14:24 [__main__] INFO: Block age: -0.045503
2022/06/15 20:14:28 [__main__] INFO: Block age: 0.978560
2022/06/15 20:14:31 [__main__] INFO: Block age: 0.509058
2022/06/15 20:14:33 [__main__] INFO: Block age: -0.245053
2022/06/15 20:14:37 [__main__] INFO: Block age: 0.741160
2022/06/15 20:14:41 [__main__] INFO: Block age: 1.072904
2022/06/15 20:14:43 [__main__] INFO: Block age: 0.907354
2022/06/15 20:14:49 [__main__] INFO: Block age: 0.754012
2022/06/15 20:14:52 [__main__] INFO: Block age: 0.710214
2022/06/15 20:14:55 [__main__] INFO: Block age: 0.560893
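• For clarity, the “block age” reported above is just the difference between the local clock and the block timestamp; a minimal version (with fixed values here instead of a live web3 call):

```python
def block_age(block_timestamp, now):
    """Age in seconds of a block relative to the local clock; negative values
    mean the node's clock is ahead of ours."""
    return now - block_timestamp

# With a live connection this would use the latest block timestamp from web3
# and time.time(); the values below are made up for illustration:
age = block_age(1655316853, 1655316853.373568)
```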