public:projects:nervland:notes:0008_nvland_windowing

NervLand: Windowing system

  • We should use the winit crate for our windowing system.
  • And we should try to support multiple windows?
  • We will encapsulate the windowing system inside the nvcore crate:
    $ cargo new nvcore --lib
  • And we should run the main app from the nvland crate:
    $ cargo new nvland --bin
  • Found that repo on Rust Design patterns: https://github.com/lpxxn/rust-design-pattern
    • singleton pattern included there.
  • ⇒ We should use the log crate to handle logging in our app.
  • Trying to run our event loop in a dedicated thread doesn't work: the correct mechanism is to send the events (from inside the loop) to another thread: https://github.com/rust-windowing/winit/blob/master/examples/multithreaded.rs
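  • For reference, that event-forwarding pattern can be sketched with std channels alone (winit specifics omitted; AppEvent is a hypothetical stand-in for the winit events we would forward):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for the winit events we would forward;
// the real app would translate winit::event::Event into this.
enum AppEvent {
    Resized(u32, u32),
    CloseRequested,
}

fn main() {
    let (tx, rx) = mpsc::channel::<AppEvent>();

    // Worker thread consuming the forwarded events.
    let worker = thread::spawn(move || {
        let mut handled = 0;
        while let Ok(event) = rx.recv() {
            handled += 1;
            match event {
                AppEvent::Resized(w, h) => println!("resize to {}x{}", w, h),
                AppEvent::CloseRequested => break,
            }
        }
        handled
    });

    // In the real app these sends would happen inside the closure
    // passed to event_loop.run(...), which stays on the main thread.
    tx.send(AppEvent::Resized(800, 600)).unwrap();
    tx.send(AppEvent::CloseRequested).unwrap();

    assert_eq!(worker.join().unwrap(), 2);
}
```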
  • We also need a RenderContext singleton in our app:
  • ⇒ Actually we need an App singleton!
  • My my… this whole RefCell, Option, Arc, Mutex stuff is driving me crazy 😅: still can't get my simple “window manager” in an “App” to work as desired… ⇒ Time to reconsider all this more carefully.
  • RustConf 2018 - Closing Keynote - Using Rust For Game Development by Catherine West:
  • Found the signals2 library: https://crates.io/crates/signals2
  • Built a (somewhat, maybe not so optimal) solution to handle resizing the underlying wgpu surface with signals when a window is resized:
        pub fn resize(&self, new_size: PSize<u32>) {
            debug!(
                "Should handle '{}' window resize to {:?}",
                self.name, new_size
            );
            self.on_resize.emit(new_size);
        }
  • I also need to add my new RenderManager into the App: it should just provide a raw reference, like for the WindowManager.
  • Note: WindowManager and RenderManager should have only interior mutability: we should never need a mut reference on them, so we should not need a RefCell on them either.
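  • A minimal sketch of such an interior-mutability-only manager (names and fields are illustrative, not the real WindowManager; a Mutex rather than a RefCell also keeps the type Sync):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// All methods take &self: the App can hand out plain shared
// references while the manager still mutates its own state.
struct WindowManager {
    windows: Mutex<HashMap<String, (u32, u32)>>,
}

impl WindowManager {
    fn new() -> Self {
        WindowManager { windows: Mutex::new(HashMap::new()) }
    }

    // Mutation through &self: the Mutex provides the interior mutability.
    fn add_window(&self, name: &str, size: (u32, u32)) {
        self.windows.lock().unwrap().insert(name.to_string(), size);
    }

    fn window_count(&self) -> usize {
        self.windows.lock().unwrap().len()
    }
}

fn main() {
    let wm = WindowManager::new();
    wm.add_window("main", (1280, 720)); // no &mut needed anywhere
    assert_eq!(wm.window_count(), 1);
}
```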
  • Do we need an Rc on them? ⇒ Since these are static elements, it should really be possible to store them naked and return a reference to them from anywhere.
  • Hmmmm, okay, it seems the following code will not complain at least:
        pub fn get_render_manager(&self) -> &RenderManager {
            let handle_ptr: *const RenderManager = &self.render_manager;
            unsafe { &*handle_ptr }
        }
  • Arrff, no: when using it we still get the “App does not live long enough” compile error.
  • But it works if we just give the result from get_render_manager a 'static lifetime 👍:
        pub fn get_render_manager(&self) -> &'static RenderManager {
            let handle_ptr: *const RenderManager = &self.render_manager;
            unsafe { &*handle_ptr }
        }
  • Note: this would definitely not work if we returned a plain reference to our render_manager member, since its lifetime would then be tied to &self.
  • As a single line we can write:
        pub fn get_render_manager(&self) -> &'static RenderManager {
            unsafe { &*(&self.render_manager as *const RenderManager) }
        }
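  • An alternative way to get a genuine &'static without the unsafe cast is Box::leak, at the cost of never dropping the value (a sketch with a toy RenderManager; note the ownership shape differs, since the manager then lives outside the App struct):

```rust
// Toy stand-in for the real RenderManager.
struct RenderManager {
    frame_count: u64,
}

// Box::leak hands back a true &'static mut: the allocation is never
// freed, which is fine for app-lifetime singletons but means
// Drop-based cleanup will not run for this value.
fn make_static_manager() -> &'static mut RenderManager {
    Box::leak(Box::new(RenderManager { frame_count: 0 }))
}

fn main() {
    let rm: &'static mut RenderManager = make_static_manager();
    rm.frame_count += 1;
    assert_eq!(rm.frame_count, 1);
}
```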
  • Currently we get a list of errors when exiting our app, due to the event_loop directly exiting the process ⇒ killing the threads and preventing the drop of the app resources.
  • ⇒ To fix this we should release the app resource on destruction of the event loop closure.
  • Moved the EventLoop into the app.rs module and implemented an exit handler to release the resources:
    struct ExitHandler {}
    
    impl Drop for ExitHandler {
        fn drop(&mut self) {
            App::release();
            debug!("Exiting application.");
        }
    }
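  • The idea can be reproduced with std only: move a guard value into the closure, and its Drop runs when the closure itself is dropped (with winit, that closure would be the one passed to event_loop.run; the types below are illustrative):

```rust
use std::cell::Cell;
use std::rc::Rc;

// Guard whose Drop performs the cleanup; in the real app this is
// where App::release() would be called.
struct ExitHandler {
    released: Rc<Cell<bool>>,
}

impl Drop for ExitHandler {
    fn drop(&mut self) {
        self.released.set(true);
        println!("Exiting application.");
    }
}

fn main() {
    let released = Rc::new(Cell::new(false));
    let handler = ExitHandler { released: Rc::clone(&released) };

    // Simulated event-loop closure taking ownership of the handler.
    let event_loop_body = move |_event: u32| {
        let _keep_alive = &handler;
    };
    event_loop_body(0);

    // Loop ends -> closure dropped -> handler dropped -> cleanup runs.
    drop(event_loop_body);
    assert!(released.get());
}
```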
  • ⇒ We still have an issue with leaking memory unfortunately:
    2022-03-03T22:25:19.865Z DEBUG [nvcore::app@main] Releasing App resources...
    2022-03-03T22:25:19.865Z DEBUG [nvcore::display@main] Dropping WindowManager.
    2022-03-03T22:25:19.866Z DEBUG [nvcore::render@main] Dropping RenderManager.
    2022-03-03T22:25:19.866Z DEBUG [nvcore::app@main] Exiting application.
    2022-03-03T22:25:19.867Z INFO [wgpu_core::hub@main] Dropping Global
    2022-03-03T22:25:19.869Z INFO [wgpu_core::device@main] Destroying 2 command encoders
    2022-03-03T22:25:19.900Z INFO [wgpu_hal::vulkan::instance@main] GENERAL [Loader Message (0x0)]
            Unloading layer library C:\Windows\System32\DriverStore\FileRepository\nv_dispig.inf_amd64_015fa42d67826549\.\nvoglv64.dll
    2022-03-03T22:25:19.901Z INFO [wgpu_hal::vulkan::instance@main]         objects: (type: INSTANCE, hndl: 0x23cd14e9310, name: ?)
    2022-03-03T22:25:19.956Z WARN [wgpu_hal::dx12::instance@main] Process is terminating. Using simple reporting. Please call ReportLiveObjects() at runtime for standard reporting.
    2022-03-03T22:25:19.956Z WARN [wgpu_hal::dx12::instance@main] Live Producer at 0x0000023CD49DF018, Refcount: 2.
    2022-03-03T22:25:19.957Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4C99FB0, Refcount: 0.
    2022-03-03T22:25:19.957Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4CFBAD0, Refcount: 0.
    2022-03-03T22:25:19.958Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4CE49F0, Refcount: 0.
    2022-03-03T22:25:19.958Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD3DAC400, Refcount: 0.
    2022-03-03T22:25:19.958Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4CF6610, Refcount: 0.
    2022-03-03T22:25:19.959Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD1526E70, Refcount: 0.
    2022-03-03T22:25:19.959Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4CF7110, Refcount: 0.
    2022-03-03T22:25:19.959Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD1527090, Refcount: 0.
    2022-03-03T22:25:19.960Z WARN [wgpu_hal::dx12::instance@main] Live                         Object :      8
    2022-03-03T22:25:19.960Z WARN [wgpu_hal::dx12::instance@main] Live Producer at 0x0000023CD4EA8908, Refcount: 2.
    2022-03-03T22:25:19.960Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4ED4C50, Refcount: 0.
    2022-03-03T22:25:19.960Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4EEAE60, Refcount: 0.
    2022-03-03T22:25:19.961Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4F14D80, Refcount: 0.
    2022-03-03T22:25:19.961Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD3DAEF20, Refcount: 0.
    2022-03-03T22:25:19.961Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4EECE80, Refcount: 0.
    2022-03-03T22:25:19.961Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD1529D30, Refcount: 0.
    2022-03-03T22:25:19.962Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD4F197F0, Refcount: 0.
    2022-03-03T22:25:19.962Z WARN [wgpu_hal::dx12::instance@main]   Live Object at 0x0000023CD1527B30, Refcount: 0.
    2022-03-03T22:25:19.962Z WARN [wgpu_hal::dx12::instance@main] Live                         Object :      8
  • Hmmm, interesting: I could actually reproduce those memory leaks in tutorial9-models, and then even in tutorial2-surface, from https://github.com/sotrh/learn-wgpu
  • ⇒ I added a SimpleLogger interface to report the debug messages above, but maybe this could also work with env_logger?
  • Yes indeed: we just need to first export:
    $ export RUST_LOG="wgpu=debug"
  • Okay, so we might have something wrong going on with the DX12 libraries loaded here:
            let lib_main = native::D3D12Lib::new().map_err(|_| crate::InstanceError)?;
    
            let lib_dxgi = native::DxgiLib::new().map_err(|_| crate::InstanceError)?;
  • Which means we need to check out the d3d12 package repo and try to update that.
  • From wgpu_hal/src/dx12/instance.rs: if I comment out this section, the error goes away:
                // Enable debug layer
                match lib_main.get_debug_interface() {
                    Ok(pair) => match pair.into_result() {
                        Ok(debug_controller) => {
                            debug_controller.enable_layer();
                            debug_controller.Release();
                        }
                        Err(err) => {
                            log::warn!("Unable to enable D3D12 debug interface: {}", err);
                        }
                    },
                    Err(err) => {
                        log::warn!("Debug interface function for D3D12 not found: {:?}", err);
                    }
                }
  • ⇒ So this really means we are keeping some refs that we should not keep?
  • Anyway, for now we can avoid that issue simply by forcing the use of the Vulkan backend on Windows:
            let instance = wgpu::Instance::new(
                wgpu::Backends::VULKAN | wgpu::Backends::METAL | wgpu::Backends::BROWSER_WEBGPU,
            );
  • To render something on screen we need to set up a render pass.
  • But can we assign multiple render/compute pipelines to a single render pass? ⇒ YES, we have an example of that in wgpu.
  • So we will need to create CommandEncoder objects in separate threads.
  • Then we need a mechanism to construct RenderPasses in those command encoders.
  • Then in those render passes we can assign multiple pipelines.
  • Each pipeline basically corresponds to one type of shader processing we want to perform.
  • It seems CommandEncoders are single-use only in wgpu.
  • Reading up on how to structure a project with modules, as this still doesn't seem to make sense to me: https://dev.to/ghost/rust-project-structure-example-step-by-step-3ee
  • Updated the app construction to use an encapsulated AppContext, allowing conversion to a mut reference in the release function:
    struct AppContext {
        window_manager: WindowManager,
        render_manager: RenderManager,
        components: HashMap<ComponentType, ComponentCell>,
    }
    
    impl AppContext {
        fn new() -> Self {
            AppContext {
                window_manager: WindowManager::new(),
                render_manager: block_on(RenderManager::new()),
                components: HashMap::new(),
            }
        }
    }
    
    pub struct App {
        context: Option<AppContext>,
        get_raw_ptr: Box<dyn Fn() -> *mut App>,
    }
    
    impl App {
        pub fn instance() -> &'static App {
            static mut SINGLETON: MaybeUninit<App> = MaybeUninit::uninit();
            static ONCE: Once = Once::new();
    
            ONCE.call_once(|| unsafe {
                info!("Creating App singleton.");
                // prepare here a destruction callback
                let get_raw_ptr = || SINGLETON.as_mut_ptr();
    
                SINGLETON.as_mut_ptr().write(App {
                    context: Some(AppContext::new()),
                    get_raw_ptr: Box::new(get_raw_ptr),
                });
            });
    
            unsafe { &*SINGLETON.as_ptr() }
        }
    
        pub fn release() {
            debug!("Releasing App resources...");
            let app = unsafe { &mut *(*App::instance().get_raw_ptr)() };
            app.context = None;
            debug!("Done releasing app.");
        }
    }
    
  • ⇒ I feel the trick I implemented above in the release function could be somewhat useful in many cases in Rust in fact: where you want to allow “internal mutability” only in a few specific methods and don't want to pay the cost of the RefCell (?) (note that I might be completely wrong, but still, as a C++ developer this seems like the kind of responsibility I'm willing to take here lol.)
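  • A minimal std-only reproduction of that release trick (toy types, not the actual App code; the raw pointer is derived from the static itself rather than from a shared reference, to stay clear of the aliasing rules):

```rust
use std::mem::MaybeUninit;
use std::ptr::addr_of_mut;
use std::sync::Once;

// Toy stand-in for AppContext: the droppable app state.
struct Context {
    value: u32,
}

struct Holder {
    context: Option<Context>,
}

static mut SINGLETON: MaybeUninit<Holder> = MaybeUninit::uninit();
static ONCE: Once = Once::new();

// Raw pointer to the singleton storage, taken from the static
// directly so that mutating through it in release() is valid.
fn raw_ptr() -> *mut Holder {
    unsafe { (*addr_of_mut!(SINGLETON)).as_mut_ptr() }
}

impl Holder {
    fn instance() -> &'static Holder {
        ONCE.call_once(|| unsafe {
            raw_ptr().write(Holder {
                context: Some(Context { value: 42 }),
            });
        });
        unsafe { &*raw_ptr() }
    }

    // The "release trick": one specific method is allowed to mutate
    // the singleton without RefCell or Mutex. The caller guarantees
    // no other reference is in use at that moment.
    fn release() {
        Holder::instance(); // make sure it is initialized
        unsafe {
            (*raw_ptr()).context = None;
        }
    }
}

fn main() {
    assert_eq!(Holder::instance().context.as_ref().map(|c| c.value), Some(42));
    Holder::release();
    assert!(Holder::instance().context.is_none());
}
```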
  • Now I should continue with a refactoring of how I should “render things”: we have our event loop taking ownership of the application main thread. I was initially thinking about moving the render processing into its own thread and then having the event loop send events to that thread when needed, but the render process would anyway have to handle those events interleaved with the continuous rendering stuff, and introducing multithreading that early in the app would already break compatibility with a webapp deployment (which is an option I would like to keep if possible) ⇒ so I'm now thinking I should simply embed our render processing into the main event loop, when there is nothing else to do (?)
  • OK, so we are back to a simple (working) clear-surface example. Now time for some code cleanup: OK
  • … hmmmm… and here we are 😁: This was fun, but I'm done already now 😲🤣! While trying to think about/design a Task system, I realized that I'm not ready to do this kind of task in Rust: to me, the language feels like it is trying to babysit the developers, always protecting them from anything wrong. I guess it could be nice when you're a beginner. It may also be nice if you are an absolute expert and you know how to deal with the “borrow checker” appropriately. But for me, it's definitely not my cup of tea for the moment: I need to move fast, I know how to deal with all (or most of, at least) the troubles from low-level programming in C++, so I really don't need that borrow checker in my way, or Arc<Mutex<T>> containers everywhere which make me feel I'm building the next biggest gas plant in the world! 😅 So: “Rust, thank you, but no thank you!” ✌️
  • Which leads me to reconsidering the options I have left once more:
    • typescript/javascript and a webapp build: out
    • Rust and potentially WASM eventually: out
    • So the only real option I have left is to… go back to C++ then?! My my my… I really need to get this working this time.
Last modified: 2022/03/07 20:42