Research Grant Extensions

Back in June, I received a $1k research credit for work on a secure AI platform, SigilDERG. Since then I’ve built a Rust dataset builder, a model finetuner purpose-built for the Meta Llama 3.1 8B Instruct family, and a fork/port of OpenAI’s HumanEval benchmark, HumanEval-Rust, which I’m using to benchmark model performance on Rust code.

Phase 1 training of my Rust-QLoRA adapters is complete (details are on the model card at Hugging Face), and I’m now wiring that into a focused “Rust Mentor” model for testing and eventual deployment inside SigilDERG. The issue is that my initial research credit will be depleted before I can reach the MVP stage the grant was originally meant to cover.

I’m a solo developer; in the absence of full-time work, most of my time goes into earning my bachelor’s degree in software engineering and building out this ecosystem. I’m drafting a full write-up of the research and progress so far, but I don’t expect that document to be ready for another couple of weeks.

My question is: who is the right person to talk to about presenting this work and applying for a grant extension? I’m preparing a cost breakdown that shows how the original credits were used, what I expect to need going forward, and alternate hardware options across different Lambda configurations (e.g., single- vs. multi-H100). Links to all related repositories are pinned on my GitHub profile.

An extension would fund three things: Phase 2 Rust-QLoRA training plus base-vs-finetuned HumanEval-Rust benchmarks; packaging of SigilDERG as a turnkey, policy-enforcing “Rust Mentor” stack that other Lambda users can run; and a separate one-command package that spins up a single H100 instance with the entire end-to-end ecosystem pre-configured for reproducible peer review.

Thank you for your time and attention. If anyone would like to reach out directly, my email should be in my profile.

Respectfully,
Dave Tofflemire