Transaction parameters

Presets

You can check the standard presets using the presets command:

./acapella presets --name transactional

->

{
    "allowConvertAsyncToSync": false,
    "allowConvertSyncToAsync": false,
    "allowRestart": true,
    "allowSubFragments": true,
    "arguments": {},
    "beginKvTransaction": false,
    "failover": false,
    "fragment": "TEST_USER/Unnamed/default:main.lua",
    "logging": {
        "allowCreateLogs": false,
        "redirections": {
            "stderr": {
                "id": "log",
                "ordering": "PARTIAL",
                "scope": "TRANSACTION"
            },
            "stdout": {
                "id": "log",
                "ordering": "PARTIAL",
                "scope": "TRANSACTION"
            }
        }
    },
    "resolveConflicts": false,
    "syncTvmIo": true,
    "tvmCount": 3
}

fragment

A full reference to the fragment. The full reference must include the snapshot name and owner; the snapshot tag is optional.

Format: <UserId>/<SnapshotName>[/<SnapshotTag>]:<FragmentPath>. Examples:

  • Alex/acapella/stage-1.2:main.py - calls the fragment located in the top-level directory of the shared snapshot acapella (owned by user Alex) with tag stage-1.2
  • Alex/acapella:fragments/test/main.py - the snapshot tag default will be used

The same format may be used for subfragment invocations (ap.call, ap.call_async, ap.call_and_await):

ap.call("Alex/acapella/stage-1.2:main.py")

Inside fragments, a "relative" fragment reference format can also be used; it doesn't require a snapshot reference:

ap.call("main.py") # will be used the same snapshot in which current fragment is located

arguments

A dictionary of string keys and values. This dictionary will be available as the arguments of the root fragment (ap.args). You can pass the root fragment's arguments in a preset, but you can also redefine them on the command line:

acapella run --args '{"p1":"v1", "p2":"v2"}' fragment.py
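
For example, a root fragment can read these values through ap.args. A minimal sketch; here the ap module is assumed to be provided to the fragment by the CPVM runtime, and ap.args is assumed to behave like a plain dict:

p1 = ap.args["p1"]        # "v1" when run with the --args value above
p2 = ap.args.get("p2")    # "v2"
print("got arguments:", p1, p2)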

resolveConflicts

If true, then CPVM will detect conflicting data updates (conflicting fragments) and optimize execution order to prevent them. This can reduce concurrency, which in turn can reduce throughput.

If false, then CPVM will leave the order of execution intact. If your fragments use TVM to work with data, the transaction can then end with a conflict.
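
For illustration, a sketch of a conflict-prone pattern: two asynchronous subfragments doing a read-modify-write on the same TVM key. The tvm_get/tvm_set helpers are hypothetical stand-ins, not the documented TVM API:

# inc.py - hypothetical subfragment doing a read-modify-write in TVM
value = tvm_get("counter")     # tvm_get is a hypothetical accessor
tvm_set("counter", value + 1)  # tvm_set is a hypothetical accessor

# main.py - two concurrent increments of the same key
ap.call_async("inc.py")
ap.call_async("inc.py")

With resolveConflicts: true, CPVM would order these two calls so that the updates do not conflict; with false, the transaction may end with a conflict.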

allowRestart

A false value of this parameter is incompatible with resolveConflicts: true.

If true, then CPVM can restart any fragment an unlimited number of times.

Some fragments have permanent effects on the outside world, so they cannot be restarted. In such cases it is recommended to set allowRestart: false in the metadata of the specific fragment instead of disabling concurrency and failover for the entire transaction.

It is a powerful idea not only to detect control-flow conflicts but also to fix them at runtime: restarting part of a transaction is lighter than restarting the whole transaction.
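
For example, a fragment like the following has a permanent effect on the outside world, so it is a candidate for allowRestart: false in its metadata (the exact metadata format is not shown here; the endpoint is a placeholder):

# notify.py - permanent external side effect: a restart would
# send the notification twice
import urllib.request
urllib.request.urlopen("https://example.com/notify", data=b"done")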

allowSubFragments

If false, then CPVM will disable the functions ap.call, ap.call_async, and ap.call_and_await in all fragments of the transaction. This can be useful for working with CPVM in a FaaS manner.

failover

CPVM is a distributed VM and can give the user execution guarantees: if some internal nodes or workers cannot execute fragments, that work is handed over to other nodes and workers.

If true, then CPVM guarantees that your transaction will eventually be executed, regardless of individual node failures. In other words, a true value of this parameter gives you "at least once" semantics, while a false value gives you "at most once" semantics.

A true value of this parameter is incompatible with allowRestart: false.
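
For example, a preset that asks for "at least once" semantics must also allow restarts (field names as in the preset above; other fields omitted):

{
    "failover": true,
    "allowRestart": true
}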

beginKvTransaction

beginKvTransaction = true | false - whether or not to obtain the transaction ID from the AcapellaDB key-value store.

If you have data stored in AcapellaDB and you want to handle it as a single transaction with automatic parallelism using CPVM, this flag must be true.

In that case CPVM does not generate the transaction ID itself, but delegates this to AcapellaDB (in other words, the CPVM trid and the transactionId in AcapellaDB will be the same).

CPVM splits all fragment calls in this transaction into subtransactions. To achieve automatic parallelism it may restart some fragments, and it can automatically roll back some fragment IO in AcapellaDB: since the transaction is the same and AcapellaDB is a transactional storage, it can roll back operations.

This is the main way to use this kind of auto-parallelism with IO to the external world (external relative to the VM, i.e. CPVM).
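
For example, to run a transaction over data in AcapellaDB, enable the flag in the preset (only the relevant fields are shown; the rest are as in the preset above):

{
    "fragment": "TEST_USER/Unnamed/default:main.lua",
    "beginKvTransaction": true,
    "resolveConflicts": true
}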

logging \ allowCreateLogs

About logging: read this first.

The logging system in CPVM is very useful for developers. Fragments have a logging API and can simply write log messages.
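
A minimal sketch: with the stdout redirection from the default preset above, plain prints from a fragment become events in the transaction log:

# fragment code: stdout is redirected to the log with id "log"
# (ordering PARTIAL, scope TRANSACTION in the default preset)
print("processing started")
print("processed 42 records")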

But CPVM is a parallel execution system, so logs must be ordered sequentially to be understandable.

Internally, CPVM uses subtransactions and special transaction markers and can fully sort log events, but sometimes real-time logging is needed.

The user can manually set the required ordering level of log parts and receive log events immediately or at the end of the transaction, with or without information from restarted fragments.

Log information can be retrieved by fragment, transaction, execution, user, or any user-defined topic.

logging \ redirections

Each redirection maps a standard stream of the fragments (stdout, stderr) to a log: id names the target log, ordering sets how strictly its events are sorted (PARTIAL in the default preset), and scope sets the grouping level (TRANSACTION in the default preset).
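
A sketch of splitting the two streams into separate logs (the id values are arbitrary; field names follow the default preset above):

"redirections": {
    "stdout": {
        "id": "app-log",
        "ordering": "PARTIAL",
        "scope": "TRANSACTION"
    },
    "stderr": {
        "id": "error-log",
        "ordering": "PARTIAL",
        "scope": "TRANSACTION"
    }
}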

tvmCount

The number of TVM nodes used in this transaction.

TVM is a special storage system used in CPVM as a shared area between fragments.

The user must store in TVM all the data on which the control flow of the transaction depends.

TVM can detect incorrect sequences of data access from asynchronously running fragments using the data access markers (DAM) of subtransactions.

So TVM is a storage: if a transaction handles a lot of data, you need more TVM nodes.

  • more data - increase tvmCount
  • more parallel access to data in TVM needed - increase tvmCount (access to TVM is synchronous)
  • data split into many pieces - more TVM nodes distribute it more evenly

dockerImage

Example:

"dockerImage": "jfloff/alpine-python"