Onboarding a new validator to a pruned network

Hi! Recently I’ve been trying to create my own network with pruning enabled, generate transactions, and onboard a new validator. Regardless of the number of transactions generated and the size of the prune window, I kept getting errors like:

[stream-serv-9] state-sync/state-sync-v2/data-streaming-service/src/streaming_service.rs:110 {"error":{"DataIsUnavailable":"Unable to satisfy stream engine: ContinuousTransactionStreamEngine(ContinuousTransactionStreamEngine { request: ContinuouslyStreamTransactions(ContinuouslyStreamTransactionsRequest { known_version: 0, known_epoch: 1, include_events: false, target: Some(V0(LedgerInfoWithV0 { ledger_info: LedgerInfo { commit_info: BlockInfo { epoch: 38, round: 93, id: HashValue(937685aadada3b827aac33570c260807821b570f230dab2339e3ea2cc7825b2f), executed_state_id: HashValue(82603a6dbb771644639764f255dd904f3f1427e534bab28bb0f294d3ff9665ae), version: 6646, timestamp_usecs: 1668437949978307, next_epoch_state: Some(EpochState [epoch: 39, validator: ValidatorSet: [645b8486d2402dc86a6aef7da811433bf80d6a27bfa9429302800c4cbe10012b: 1, 38251403f1ad3fb9c4e1c32e081885986f465d9938844c01a7d3a0b18f7d32af: 1, 39b14ab7a6026579ecba2a7fa1cf300db0cb74b0e893731ba691f53b29a6f557: 1, ]]) }, consensus_data_hash: HashValue(e900a7cae6e064999a84511e09606cd50a5d6b7347b59309c8b65a695a2eae7b) }, signatures: AggregateSignature { validator_bitmask: BitVec { inner: [192] }, sig: Some(86f293ebdf14b64b114a988f64b643e63a506b59937cf3861a2d2244f31d8e010880aeee2dfa34c9f4ae0cad4cf4a9bd135c4f0fadd7381c7296356de03648eed6fb7de4cc49f1701f07bec452ebdea26875e444ec8835a3d95893c8efe9a160) } })) }), current_target_ledger_info: None, end_of_epoch_requested: false, subscription_requested: false, next_stream_version_and_epoch: (1, 1), next_request_version_and_epoch: (1, 1), stream_is_complete: false }), with advertised data: epoch_ending_ledger_infos: [CompleteDataRange { lowest: 0, highest: 38 }, CompleteDataRange { lowest: 0, highest: 38 }], states: [CompleteDataRange { lowest: 5647, highest: 6646 }, CompleteDataRange { lowest: 5647, highest: 6646 }], synced_ledger_infos: [(Version: 6646, Epoch: 38, Ends epoch: true), (Version: 6646, Epoch: 38, Ends epoch: true)], transactions: [CompleteDataRange { lowest: 5645, highest: 6646 }, CompleteDataRange { lowest: 5645, highest: 6646 }], transaction_outputs: [CompleteDataRange { lowest: 5645, highest: 6646 }, CompleteDataRange { lowest: 5645, highest: 6646 }]"},"event":"error","name":"handle_stream_request"}

or

[leader reputation] Fail to refresh window {"error":"First requested event is probably pruned. expected: 3269, actual: 3338"}

The logs above are from a run with a 100k prune window and 100k+ generated transactions, but the same thing happens with a 1 million window and 1 million+ transactions.
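For what it's worth, the first error seems readable straight off the advertised data: the new validator starts from known_version 0, but the peers only advertise transactions in [5645, 6646] (everything earlier has been pruned), so no peer can satisfy a continuous stream from genesis. A minimal sketch of that availability check (the class and function names are mine for illustration, not the actual data-streaming-service code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CompleteDataRange:
    lowest: int
    highest: int

def can_serve_from(known_version: int, advertised: CompleteDataRange) -> bool:
    """A peer can serve a continuous stream starting at known_version + 1
    only if that next version still exists in its (pruned) store."""
    next_version = known_version + 1
    return advertised.lowest <= next_version <= advertised.highest

# Values taken from the log above: the new node knows nothing (version 0),
# while peers only hold transactions in [5645, 6646].
advertised = CompleteDataRange(lowest=5645, highest=6646)
print(can_serve_from(0, advertised))     # a node syncing from genesis cannot be served
print(can_serve_from(5700, advertised))  # a node already past the pruned range can
```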

My pruning and state sync settings are as follows:

storage_pruner_config:
  ledger_pruner_config:
    enable: true
    prune_window: 1000000
    batch_size: 500
    user_pruning_window_offset: 50000
  state_merkle_pruner_config:
    enable: true
    prune_window: 1000000
    batch_size: 500
  epoch_snapshot_pruner_config:
    enable: true
    prune_window: 1000000
    batch_size: 500
state_sync_driver:
  bootstrapping_mode: DownloadLatestStates
  continuous_syncing_mode: ApplyTransactionOutputs

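One thing I'm unsure about in this config is user_pruning_window_offset. My understanding (which may well be wrong, and is exactly what I'd like confirmed) is that the pruner physically keeps prune_window versions but answers reads as if only prune_window - offset versions remained, so readers never race with in-flight pruning. A rough sketch of that interpretation, with a helper name of my own:

```python
def oldest_readable_version(latest_version: int,
                            prune_window: int,
                            user_pruning_window_offset: int) -> int:
    """Assumption: the window visible to readers is
    prune_window - user_pruning_window_offset versions, so data slightly
    older than the physical prune point is already reported as pruned."""
    user_window = prune_window - user_pruning_window_offset
    return max(0, latest_version - user_window)

# With the config above (prune_window=1000000, offset=50000) at version 2000000,
# the oldest version a reader could still fetch would be:
print(oldest_readable_version(2_000_000, 1_000_000, 50_000))  # 1050000
```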
I saw that pruning has already kicked in on testnet, so it seems to be supported. Do you have any idea what might be wrong or misconfigured? Thanks!

Hey, welcome aboard the Aptos flight! We are so happy to have you with us. Sit tight, fasten your seat belt, and follow the rules, because we are going on an Aptos ride. To the future :clinking_glasses: