[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-deep-evm-9-huff-primer-macros-labels-opcodes":3},{"article":4,"author":54},{"id":5,"category_id":6,"title":7,"slug":8,"excerpt":9,"content_md":10,"content_html":11,"locale":12,"author_id":13,"published":14,"published_at":15,"meta_title":7,"meta_description":16,"focus_keyword":17,"og_image":18,"canonical_url":18,"robots_meta":19,"created_at":15,"updated_at":15,"tags":20,"category_name":34,"related_articles":35},"d0000000-0000-0000-0000-000000000109","a0000000-0000-0000-0000-000000000002","Deep EVM #9: Huff Language Primer — Macros, Labels, and Raw Opcodes","deep-evm-9-huff-primer-macros-labels-opcodes","A hands-on introduction to Huff, the low-level EVM assembly language that gives you direct control over every opcode, every byte of bytecode, and every gas unit.","## Why Huff Exists\n\nSolidity is a wonderful abstraction — until it is not. When you need a contract that fits inside 100 bytes of runtime bytecode, dispatches functions in O(1) with a packed jump table, or shaves 200 gas off a hot path that executes millions of times per day, you need something closer to the metal. That something is **Huff**.\n\nHuff is a low-level EVM assembly language with a thin macro system bolted on top. It does not have variables, types, or a compiler that optimizes behind your back. What you write is what ends up on chain — opcode for opcode.\n\n## Installing Huff\n\nThe canonical compiler is `huffc`, written in Rust:\n\n```bash\ncurl -L get.huff.sh | bash\nhuffup\nhuffc --version\n```\n\nThis installs `huffc` to `~\u002F.huff\u002Fbin`. 
Add it to your PATH and verify:\n\n```bash\n$ huffc --version\nhuffc 0.3.2\n```\n\nYou can also use Huff inside Foundry projects with `foundry-huff`, which lets you deploy `.huff` files the same way you deploy `.sol` files.\n\n## Hello World: A Minimal Contract\n\nLet us write a contract that returns the 32-byte word `0x01` to any call:\n\n```huff\n#define macro MAIN() = takes(0) returns(0) {\n    0x01            \u002F\u002F [0x01]\n    0x00            \u002F\u002F [0x00, 0x01]\n    mstore          \u002F\u002F []          — memory[0x00..0x20] = 0x01\n    0x20            \u002F\u002F [0x20]\n    0x00            \u002F\u002F [0x00, 0x20]\n    return          \u002F\u002F halt — return memory[0x00..0x20]\n}\n```\n\nCompile:\n\n```bash\nhuffc src\u002FHelloWorld.huff -r\n```\n\nThe `-r` flag outputs the runtime bytecode. You will see something like `600160005260206000f3` — 10 bytes. A Solidity contract returning `1` compiles to roughly 200+ bytes of runtime bytecode because solc emits a full function dispatcher, metadata hash, free memory pointer setup, and ABI encoder.\n\n## Macros vs Functions\n\nHuff has two code-reuse primitives: **macros** and **functions**.\n\n### Macros (`#define macro`)\n\nMacros are inlined at every call site. No JUMP overhead, no extra gas — the compiler literally copy-pastes the opcodes into the caller. This is the default and the preferred choice for gas-critical code.\n\n```huff\n#define macro REQUIRE_NOT_ZERO() = takes(1) returns(0) {\n    \u002F\u002F takes: [value]\n    continue        \u002F\u002F [continue_dest, value]\n    jumpi           \u002F\u002F []  — jump if value != 0\n    0x00 0x00 revert\n    continue:\n}\n```\n\n### Functions (`#define fn`)\n\nFunctions generate an actual JUMP\u002FJUMPDEST pair. They save bytecode size at the expense of ~22 extra gas per call (8 for JUMP + 1 for JUMPDEST + stack manipulation). 
Use them only when bytecode size matters more than gas.\n\n```huff\n#define fn safe_add() = takes(2) returns(1) {\n    \u002F\u002F takes: [a, b]\n    dup2 dup2       \u002F\u002F [a, b, a, b]\n    add             \u002F\u002F [sum, a, b]\n    dup1            \u002F\u002F [sum, sum, a, b]\n    swap2           \u002F\u002F [a, sum, sum, b]\n    gt              \u002F\u002F [overflow?, sum, b]\n    overflow jumpi\n    swap1 pop       \u002F\u002F [sum]\n    back jump\n    overflow:\n        0x00 0x00 revert\n    back:\n}\n```\n\n## Labels and Jump Destinations\n\nLabels in Huff are named JUMPDEST locations. The compiler resolves them to concrete bytecode offsets at compile time.\n\n```huff\n#define macro LOOP_EXAMPLE() = takes(1) returns(1) {\n    \u002F\u002F takes: [n]\n    0x00                \u002F\u002F [acc, n]\n    loop:\n        dup2            \u002F\u002F [n, acc, n]\n        iszero          \u002F\u002F [n==0?, acc, n]\n        done jumpi      \u002F\u002F [acc, n]\n        swap1           \u002F\u002F [n, acc]\n        0x01 swap1 sub  \u002F\u002F [n-1, acc]\n        swap1           \u002F\u002F [acc, n-1]\n        0x01 add        \u002F\u002F [acc+1, n-1]\n        loop jump\n    done:\n        swap1 pop       \u002F\u002F [acc]\n}\n```\n\nEach label compiles to a single `JUMPDEST` byte (`0x5b`). The references (`loop jump`, `done jumpi`) compile to `PUSH2 \u003Coffset> JUMP` (or `JUMPI`). This is exactly what you would write by hand in raw EVM assembly — Huff just handles the offset bookkeeping.\n\n## takes() and returns()\n\nThe `takes(n)` and `returns(m)` annotations on macros and functions are documentation and compiler hints. 
They tell the reader — and the Huff compiler's stack checker — how many stack items the block expects to consume and produce.\n\n```huff\n#define macro ADD_TWO() = takes(2) returns(1) {\n    add  \u002F\u002F consumes 2 items, produces 1\n}\n```\n\nIf your actual stack behavior does not match the annotation, `huffc` will emit a warning. Treat these annotations as a poor man's type system — they prevent you from accidentally leaving garbage on the stack or underflowing.\n\n## Comparison: Huff vs Solidity Bytecode\n\nConsider a simple `getValue()` view function that returns a storage slot:\n\n**Solidity:**\n```solidity\nfunction getValue() external view returns (uint256) {\n    return value;\n}\n```\n\nSolc generates ~40 bytes for the dispatcher + ABI encoding:\n```\nCALLDATASIZE → CALLDATALOAD → SHR 224 → DUP1 → PUSH4 selector\n→ EQ → PUSH2 dest → JUMPI → ... → SLOAD → PUSH1 0x00\n→ MSTORE → PUSH1 0x20 → PUSH1 0x00 → RETURN\n```\n\n**Huff equivalent:**\n```huff\n#define function getValue() view returns (uint256)\n\n#define macro GET_VALUE() = takes(0) returns(0) {\n    [VALUE_SLOT]    \u002F\u002F [slot]\n    sload           \u002F\u002F [value]\n    0x00 mstore     \u002F\u002F []  — store in memory\n    0x20 0x00 return\n}\n```\n\nThe Huff version is 11 bytes of bytecode for the body. No ABI encoding overhead, no free memory pointer, no metadata hash. When you control the caller (e.g., an MEV bot calling its own contract), you can strip everything the Solidity compiler assumes you need.\n\n## Constants and Storage Slots\n\nHuff constants are compile-time values that get inlined as PUSH instructions:\n\n```huff\n#define constant VALUE_SLOT = 0x00\n#define constant OWNER_SLOT = 0x01\n#define constant MAX_UINT = 0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff\n```\n\nUsage: `[VALUE_SLOT]` pushes `0x00`, `[MAX_UINT]` pushes the full 32-byte value. 
Constants help readability without costing any gas — they are purely syntactic.\n\n## Includes and Project Structure\n\nReal Huff projects split code across multiple files:\n\n```huff\n\u002F\u002F src\u002FMain.huff\n#include \".\u002Futils\u002FSafeMath.huff\"\n#include \".\u002Finterfaces\u002FIERC20.huff\"\n#include \".\u002FDispatcher.huff\"\n\n#define macro MAIN() = takes(0) returns(0) {\n    DISPATCHER()\n}\n```\n\nThe include system is simple textual inclusion — no module scoping or namespaces. Name your macros carefully to avoid collisions.\n\n## When to Use Huff\n\nHuff is not a general-purpose language. Use it when:\n\n1. **Gas is the primary constraint** — MEV contracts where 100 gas determines profitability.\n2. **Bytecode size matters** — Contracts deployed by other contracts (CREATE2 factories) where smaller initcode = less deployment gas.\n3. **You need custom dispatch** — Jump tables, bit-packed selectors, or non-standard ABI encoding.\n4. **You are learning the EVM** — Nothing teaches the EVM better than writing raw opcodes.\n\nFor everything else, write Solidity and read the compiler output with `solc --asm`. You will be more productive and less error-prone.\n\n## Summary\n\nHuff gives you a direct line to EVM bytecode with just enough abstraction to stay sane. Macros inline code for zero-overhead reuse. Labels handle jump offset bookkeeping. `takes`\u002F`returns` annotations catch stack errors early. In the next article, we will dive deeper into stack management — the art of `dup`, `swap`, and keeping your mental model of the stack in sync with reality.","\u003Ch2 id=\"why-huff-exists\">Why Huff Exists\u003C\u002Fh2>\n\u003Cp>Solidity is a wonderful abstraction — until it is not. When you need a contract that fits inside 100 bytes of runtime bytecode, dispatches functions in O(1) with a packed jump table, or shaves 200 gas off a hot path that executes millions of times per day, you need something closer to the metal. 
That something is \u003Cstrong>Huff\u003C\u002Fstrong>.\u003C\u002Fp>\n\u003Cp>Huff is a low-level EVM assembly language with a thin macro system bolted on top. It does not have variables, types, or a compiler that optimizes behind your back. What you write is what ends up on chain — opcode for opcode.\u003C\u002Fp>\n\u003Ch2 id=\"installing-huff\">Installing Huff\u003C\u002Fh2>\n\u003Cp>The canonical compiler is \u003Ccode>huffc\u003C\u002Fcode>, written in Rust:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-bash\">curl -L get.huff.sh | bash\nhuffup\nhuffc --version\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>This installs \u003Ccode>huffc\u003C\u002Fcode> to \u003Ccode>~\u002F.huff\u002Fbin\u003C\u002Fcode>. Add it to your PATH and verify:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-bash\">$ huffc --version\nhuffc 0.3.2\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>You can also use Huff inside Foundry projects with \u003Ccode>foundry-huff\u003C\u002Fcode>, which lets you deploy \u003Ccode>.huff\u003C\u002Fcode> files the same way you deploy \u003Ccode>.sol\u003C\u002Fcode> files.\u003C\u002Fp>\n\u003Ch2 id=\"hello-world-a-minimal-contract\">Hello World: A Minimal Contract\u003C\u002Fh2>\n\u003Cp>Let us write a contract that returns the 32-byte word \u003Ccode>0x01\u003C\u002Fcode> to any call:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-huff\">#define macro MAIN() = takes(0) returns(0) {\n    0x01            \u002F\u002F [0x01]\n    0x00            \u002F\u002F [0x00, 0x01]\n    mstore          \u002F\u002F []          — memory[0x00..0x20] = 0x01\n    0x20            \u002F\u002F [0x20]\n    0x00            \u002F\u002F [0x00, 0x20]\n    return          \u002F\u002F halt — return memory[0x00..0x20]\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Compile:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-bash\">huffc src\u002FHelloWorld.huff -r\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>The \u003Ccode>-r\u003C\u002Fcode> flag outputs the 
runtime bytecode. You will see something like \u003Ccode>600160005260206000f3\u003C\u002Fcode> — 10 bytes. A Solidity contract returning \u003Ccode>1\u003C\u002Fcode> compiles to roughly 200+ bytes of runtime bytecode because solc emits a full function dispatcher, metadata hash, free memory pointer setup, and ABI encoder.\u003C\u002Fp>\n\u003Ch2 id=\"macros-vs-functions\">Macros vs Functions\u003C\u002Fh2>\n\u003Cp>Huff has two code-reuse primitives: \u003Cstrong>macros\u003C\u002Fstrong> and \u003Cstrong>functions\u003C\u002Fstrong>.\u003C\u002Fp>\n\u003Ch3>Macros (\u003Ccode>#define macro\u003C\u002Fcode>)\u003C\u002Fh3>\n\u003Cp>Macros are inlined at every call site. No JUMP overhead, no extra gas — the compiler literally copy-pastes the opcodes into the caller. This is the default and the preferred choice for gas-critical code.\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-huff\">#define macro REQUIRE_NOT_ZERO() = takes(1) returns(0) {\n    \u002F\u002F takes: [value]\n    continue        \u002F\u002F [continue_dest, value]\n    jumpi           \u002F\u002F []  — jump if value != 0\n    0x00 0x00 revert\n    continue:\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>Functions (\u003Ccode>#define fn\u003C\u002Fcode>)\u003C\u002Fh3>\n\u003Cp>Functions generate an actual JUMP\u002FJUMPDEST pair. They save bytecode size at the expense of ~22 extra gas per call (8 for JUMP + 1 for JUMPDEST + stack manipulation). 
Use them only when bytecode size matters more than gas.\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-huff\">#define fn safe_add() = takes(2) returns(1) {\n    \u002F\u002F takes: [a, b]\n    dup2 dup2       \u002F\u002F [a, b, a, b]\n    add             \u002F\u002F [sum, a, b]\n    dup1            \u002F\u002F [sum, sum, a, b]\n    swap2           \u002F\u002F [a, sum, sum, b]\n    gt              \u002F\u002F [overflow?, sum, b]\n    overflow jumpi\n    swap1 pop       \u002F\u002F [sum]\n    back jump\n    overflow:\n        0x00 0x00 revert\n    back:\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"labels-and-jump-destinations\">Labels and Jump Destinations\u003C\u002Fh2>\n\u003Cp>Labels in Huff are named JUMPDEST locations. The compiler resolves them to concrete bytecode offsets at compile time.\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-huff\">#define macro LOOP_EXAMPLE() = takes(1) returns(1) {\n    \u002F\u002F takes: [n]\n    0x00                \u002F\u002F [acc, n]\n    loop:\n        dup2            \u002F\u002F [n, acc, n]\n        iszero          \u002F\u002F [n==0?, acc, n]\n        done jumpi      \u002F\u002F [acc, n]\n        swap1           \u002F\u002F [n, acc]\n        0x01 swap1 sub  \u002F\u002F [n-1, acc]\n        swap1           \u002F\u002F [acc, n-1]\n        0x01 add        \u002F\u002F [acc+1, n-1]\n        loop jump\n    done:\n        swap1 pop       \u002F\u002F [acc]\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Each label compiles to a single \u003Ccode>JUMPDEST\u003C\u002Fcode> byte (\u003Ccode>0x5b\u003C\u002Fcode>). The references (\u003Ccode>loop jump\u003C\u002Fcode>, \u003Ccode>done jumpi\u003C\u002Fcode>) compile to \u003Ccode>PUSH2 &lt;offset&gt; JUMP\u003C\u002Fcode> (or \u003Ccode>JUMPI\u003C\u002Fcode>). 
This is exactly what you would write by hand in raw EVM assembly — Huff just handles the offset bookkeeping.\u003C\u002Fp>\n\u003Ch2 id=\"takes-and-returns\">takes() and returns()\u003C\u002Fh2>\n\u003Cp>The \u003Ccode>takes(n)\u003C\u002Fcode> and \u003Ccode>returns(m)\u003C\u002Fcode> annotations on macros and functions are documentation and compiler hints. They tell the reader — and the Huff compiler’s stack checker — how many stack items the block expects to consume and produce.\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-huff\">#define macro ADD_TWO() = takes(2) returns(1) {\n    add  \u002F\u002F consumes 2 items, produces 1\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>If your actual stack behavior does not match the annotation, \u003Ccode>huffc\u003C\u002Fcode> will emit a warning. Treat these annotations as a poor man’s type system — they prevent you from accidentally leaving garbage on the stack or underflowing.\u003C\u002Fp>\n\u003Ch2 id=\"comparison-huff-vs-solidity-bytecode\">Comparison: Huff vs Solidity Bytecode\u003C\u002Fh2>\n\u003Cp>Consider a simple \u003Ccode>getValue()\u003C\u002Fcode> view function that returns a storage slot:\u003C\u002Fp>\n\u003Cp>\u003Cstrong>Solidity:\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-solidity\">function getValue() external view returns (uint256) {\n    return value;\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Solc generates ~40 bytes for the dispatcher + ABI encoding:\u003C\u002Fp>\n\u003Cpre>\u003Ccode>CALLDATASIZE → CALLDATALOAD → SHR 224 → DUP1 → PUSH4 selector\n→ EQ → PUSH2 dest → JUMPI → ... 
→ SLOAD → PUSH1 0x00\n→ MSTORE → PUSH1 0x20 → PUSH1 0x00 → RETURN\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>\u003Cstrong>Huff equivalent:\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-huff\">#define function getValue() view returns (uint256)\n\n#define macro GET_VALUE() = takes(0) returns(0) {\n    [VALUE_SLOT]    \u002F\u002F [slot]\n    sload           \u002F\u002F [value]\n    0x00 mstore     \u002F\u002F []  — store in memory\n    0x20 0x00 return\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>The Huff version is 11 bytes of bytecode for the body. No ABI encoding overhead, no free memory pointer, no metadata hash. When you control the caller (e.g., an MEV bot calling its own contract), you can strip everything the Solidity compiler assumes you need.\u003C\u002Fp>\n\u003Ch2 id=\"constants-and-storage-slots\">Constants and Storage Slots\u003C\u002Fh2>\n\u003Cp>Huff constants are compile-time values that get inlined as PUSH instructions:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-huff\">#define constant VALUE_SLOT = 0x00\n#define constant OWNER_SLOT = 0x01\n#define constant MAX_UINT = 0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Usage: \u003Ccode>[VALUE_SLOT]\u003C\u002Fcode> pushes \u003Ccode>0x00\u003C\u002Fcode>, \u003Ccode>[MAX_UINT]\u003C\u002Fcode> pushes the full 32-byte value. 
Constants help readability without costing any gas — they are purely syntactic.\u003C\u002Fp>\n\u003Ch2 id=\"includes-and-project-structure\">Includes and Project Structure\u003C\u002Fh2>\n\u003Cp>Real Huff projects split code across multiple files:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-huff\">\u002F\u002F src\u002FMain.huff\n#include \".\u002Futils\u002FSafeMath.huff\"\n#include \".\u002Finterfaces\u002FIERC20.huff\"\n#include \".\u002FDispatcher.huff\"\n\n#define macro MAIN() = takes(0) returns(0) {\n    DISPATCHER()\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>The include system is simple textual inclusion — no module scoping or namespaces. Name your macros carefully to avoid collisions.\u003C\u002Fp>\n\u003Ch2 id=\"when-to-use-huff\">When to Use Huff\u003C\u002Fh2>\n\u003Cp>Huff is not a general-purpose language. Use it when:\u003C\u002Fp>\n\u003Col>\n\u003Cli>\u003Cstrong>Gas is the primary constraint\u003C\u002Fstrong> — MEV contracts where 100 gas determines profitability.\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Bytecode size matters\u003C\u002Fstrong> — Contracts deployed by other contracts (CREATE2 factories) where smaller initcode = less deployment gas.\u003C\u002Fli>\n\u003Cli>\u003Cstrong>You need custom dispatch\u003C\u002Fstrong> — Jump tables, bit-packed selectors, or non-standard ABI encoding.\u003C\u002Fli>\n\u003Cli>\u003Cstrong>You are learning the EVM\u003C\u002Fstrong> — Nothing teaches the EVM better than writing raw opcodes.\u003C\u002Fli>\n\u003C\u002Fol>\n\u003Cp>For everything else, write Solidity and read the compiler output with \u003Ccode>solc --asm\u003C\u002Fcode>. You will be more productive and less error-prone.\u003C\u002Fp>\n\u003Ch2 id=\"summary\">Summary\u003C\u002Fh2>\n\u003Cp>Huff gives you a direct line to EVM bytecode with just enough abstraction to stay sane. Macros inline code for zero-overhead reuse. Labels handle jump offset bookkeeping. 
\u003Ccode>takes\u003C\u002Fcode>\u002F\u003Ccode>returns\u003C\u002Fcode> annotations catch stack errors early. In the next article, we will dive deeper into stack management — the art of \u003Ccode>dup\u003C\u002Fcode>, \u003Ccode>swap\u003C\u002Fcode>, and keeping your mental model of the stack in sync with reality.\u003C\u002Fp>\n","en","b0000000-0000-0000-0000-000000000001",true,"2026-03-28T10:44:22.869903Z","Introduction to Huff, the low-level EVM assembly language. Learn macros, labels, takes\u002Freturns, and how Huff bytecode compares to Solidity output.","huff language evm",null,"index, follow",[21,26,30],{"id":22,"name":23,"slug":24,"created_at":25},"c0000000-0000-0000-0000-000000000016","EVM","evm","2026-03-28T10:44:21.513630Z",{"id":27,"name":28,"slug":29,"created_at":25},"c0000000-0000-0000-0000-000000000020","Gas Optimization","gas-optimization",{"id":31,"name":32,"slug":33,"created_at":25},"c0000000-0000-0000-0000-000000000017","Huff","huff","Blockchain",[36,42,48],{"id":37,"title":38,"slug":39,"excerpt":40,"locale":12,"category_name":34,"published_at":41},"de000000-0000-0000-0000-000000000003","The Ethereum Interoperability Layer: How 55+ L2s Become One Chain","ethereum-interoperability-layer-how-55-l2s-become-one-chain","Ethereum has 55+ Layer 2 rollups, fragmenting liquidity and user experience. The Ethereum Interoperability Layer — combining cross-rollup messaging, shared sequencers, and based rollups — aims to unify them into a single composable network.","2026-03-28T10:44:35.632478Z",{"id":43,"title":44,"slug":45,"excerpt":46,"locale":12,"category_name":34,"published_at":47},"de000000-0000-0000-0000-000000000002","ZK Proofs Beyond Rollups: Verifiable AI Inference on Ethereum","zk-proofs-beyond-rollups-verifiable-ai-inference-ethereum","Zero-knowledge proofs are no longer just a scaling tool. 
In 2026, zkML enables verifiable AI inference on-chain, ZK coprocessors move heavy computation off-chain with on-chain verification, and new proving systems like SP1 and Jolt make it practical.","2026-03-28T10:44:35.618408Z",{"id":49,"title":50,"slug":51,"excerpt":52,"locale":12,"category_name":34,"published_at":53},"dd000000-0000-0000-0000-000000000003","EIP-7702 in Practice: Building Smart Account Flows After Pectra","eip-7702-in-practice-building-smart-account-flows-after-pectra","EIP-7702 lets any Ethereum EOA temporarily act as a smart contract within a single transaction. Here is how to implement batch transactions, gas sponsorship, and social recovery using the new account abstraction primitive.","2026-03-28T10:44:35.031290Z",{"id":13,"name":55,"slug":56,"bio":57,"photo_url":18,"linkedin":18,"role":58,"created_at":59,"updated_at":59},"Open Soft Team","open-soft-team","The engineering team at Open Soft, building premium software solutions from Bali, Indonesia.","Engineering Team","2026-03-28T08:31:22.226811Z"]