This chapter describes the Python instrumentation API from the module `palanteer`.
## Initialization
### Automatic instrumentation without code modification
It is possible to automatically instrument unmodified Python code, exactly as it is done with the `cProfile` module.
The syntax is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
python -m palanteer [options] <Python program>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It automatically instruments:
- the function entering/leaving (Python and C)
- the raised exceptions
- the garbage collector activations
- the memory allocations (OS calls)
It does not instrument:
- the locks
- the values of variables
The full usage is obtained with `python -m palanteer`:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ none
> python3 -m palanteer
Palanteer profiler usage:
Either:
1) With code instrumentation: insert a call to palanteer.plInitAndStart(app_name, ...) in your main function.
See Palanteer documentation for details.
Manual instrumentation can provide much more information (data, locks, ...) than just the automatic function profiling.
2) With unmodified script: 'python -m palanteer [options] <your script>'
This syntax is similar to the cProfile usage. No script modification is required but only the function timings and exceptions are profiled.
Options for case (2):
-s <server IP address> Set the server IP address (default is 127.0.0.1)
-p <server TCP port> Set the server port (default is 59059)
-f <filename.pltraw> Save the profiling data into a file to be imported in the Palanteer viewer (default=remote connection)
-nf Do not automatically trace the functions (default=trace functions)
-ne Do not automatically trace the exceptions (default=trace exception)
-nm Do not trace the memory allocations (default=trace memory allocations)
-ng Do not automatically trace the garbage collector runs (default=trace gc)
-c Do automatically trace the C functions (default=only python functions)
-w Wait for server connection (Palanteer viewer or scripting module). Applicable only in case of remote connection.
-m Run the app as a module (similar to python's "-m" option)
Note 1: in case of connection to the server and -w is not used and no server is reachable, profiling is simply skipped
Note 2: on both Windows and Linux, context switch information is available only with root privileges (OS limitation)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
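For example, profiling a hypothetical script `my_script.py` and storing the result into a file (instead of a live connection to the viewer) could look like:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ none
python -m palanteer -f my_record.pltraw my_script.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~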
### plInitAndStart
This function initializes and starts the Python instrumentation service.
Only the name of the application is mandatory. It allows the viewer to group the records for the same application together.
By default, the service is in remote connection mode. If `record_filename` is not empty, it switches to
file storage mode (no remote connection) with this file name.
The expected extension is `.pltraw`; it is added if not present.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
def plInitAndStart(app_name,                               # Name of the program (mandatory)
                   record_filename = None,                 # Name of the record file. If not empty, the mode is "storing on file". Default is remote connection
                   build_name = None,                      # Build name (optional). Visible in the viewer as a nickname.
                   server_address = "127.0.0.1",           # Server IP address (server=viewer or scripting module)
                   server_port = 59059,                    # Server port
                   do_wait_for_server_connection = False,  # If True, this call blocks until a remote connection is established
                   with_functions = True,                  # If True, function entering and leaving are automatically instrumented (Python and C)
                   with_exceptions = True,                 # If True, exceptions are automatically instrumented (as logs)
                   with_memory = True,                     # If True, the memory allocations/deallocations are traced
                   with_gc = True,                         # If True, garbage collection is automatically instrumented (as a lock)
                   with_c_calls = False                    # If True, the C calls are traced. Set it to False if builtin calls are too verbose
                   )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Example of call:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
plInitAndStart("Tetris", do_wait_for_server_connection=True)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In case of monitoring a program in production, it is recommended to (see the sketch below):
- disable the automatic function instrumentation with `with_functions=False`, in order to remove most of the tracing performance impact
- manually instrument the important scopes and data
- use scripting to implement your monitoring tool, or to pipe the data into it
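A minimal sketch of such a production setup (the request handler and event names are hypothetical, only the `palanteer` calls come from the API described in this chapter):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
from palanteer import plBegin, plData, plEnd, plInitAndStart, plStopAndUninit

# Start the service without automatic function tracing to keep the overhead low
plInitAndStart("MyService", with_functions=False, with_c_calls=False)

def process_request(request):                    # Hypothetical request handler
    plBegin("Process request")                   # Manually instrument the important scope...
    plData("Request size##byte", len(request))   # ...and the important data
    ...                                          # Actual processing
    plEnd("Process request")

# ... run the service, possibly controlled remotely with the scripting module ...

plStopAndUninit()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~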
Almost all memory allocations of the `Palanteer` services are done in `plInitAndStart`. For details, refer to the section [memory usage](index.html#c++memoryusage).
!!!
`plInitAndStart` is supposed to be called at most once per program execution; subsequent calls are ignored. It may be called again after `plStopAndUninit`.
Once started, the event collection keeps running until the service is stopped.
All resources are freed when calling the associated function `plStopAndUninit` below.
!!! warning Important
The `with_c_calls` parameter controls the automatic tracing of functions implemented in C, which includes many intensively used builtins like `print`, `list.append`, etc.
If this information is not really needed, it is worth keeping it off to reduce the resource usage (CPU, network and record size).
### plStopAndUninit
This function stops and uninitializes the Python instrumentation service. It is typically called before exiting the program.
- It flushes the last collected events (up to this call) to the server
- It properly stops the `Palanteer` threads
- It releases the resources (threads and memory)
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
def plStopAndUninit()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
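A minimal sketch of a typical usage (the `main` function and its body are hypothetical):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
from palanteer import plInitAndStart, plStopAndUninit

def main():
    plInitAndStart("MyApp")    # Start the instrumentation service
    try:
        ...                    # Program body
    finally:
        plStopAndUninit()      # Flush the last events and release the resources

if __name__ == "__main__":
    main()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~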
!!!
If it is not called explicitly before exiting the program (in case of a crash, or simply because it was never called), the service is stopped implicitly, unless the process is hard-killed.
This behavior ensures that all last events are properly recorded, which is critical in case of crash.
### plIsActivated
This function returns the boolean state of the `Palanteer` service, `True` if activated and `False` if not.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
def plIsActivated()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
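A possible use is to skip the preparation of costly instrumentation data when the service is not active. A minimal sketch, where `compute_cache_summary` is a hypothetical and expensive helper:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
from palanteer import plData, plIsActivated

if plIsActivated():
    # Build the (costly) summary string only if events are actually collected
    plData("Cache state", compute_cache_summary())   # compute_cache_summary() is hypothetical
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~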
## Structure tracing
### plDeclareThread
A thread can be given a name, at any moment during the recording, so that:
- it is easier to recognize
- it has a persistent identifier which makes scripting more reliable
The default name is "`Thread N`", where `N` is the order of appearance of the thread (so it may differ from run to run).
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
def plDeclareThread(threadName)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An example of usage:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
plDeclareThread("Worker 1")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Threads can be grouped by prefixing the thread name with a group name, separated with "/" (similar to UNIX pathname).
Only one hierarchical level is accepted.
An example of thread grouping:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# Important: Calls here are purely for explanation. plDeclareThread shall of course be called in its respective thread
# Standalone thread, not in a group
# (Called in a thread)
plDeclareThread("Render")
# The calls below implicitly define a group "Workers" which contains 3 threads
# (Called at the start of another thread)
plDeclareThread("Workers/Worker 1")
# (Called at the start of yet another thread)
plDeclareThread("Workers/Worker 2")
# (Called at the start of yet another thread)
plDeclareThread("Workers/Worker 3")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
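A more complete sketch, calling `plDeclareThread` from inside the thread it names (the worker function is hypothetical):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import threading
from palanteer import plDeclareThread

def worker_entry(worker_id):
    # Called inside the thread itself, so the name applies to the correct thread
    plDeclareThread("Workers/Worker %d" % worker_id)
    ...   # Worker job

threads = [threading.Thread(target=worker_entry, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~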
### plBegin and plEnd
These two functions are used together: they define the explicit start and end of a scope. A scope is a timestamped chunk of a hierarchical level.
The provided name of the scope shall be the same for both calls, so that any mismatch can be detected on the server side.
!!! warning
This API shall be used with caution.
For instance, missing a `plEnd` call, typically in a function with multiple exit points, leads to a broken hierarchical data tree structure.
Also, interleaved scopes lead to mismatching "begin" and "end" events.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# Declares the start of a scope with the provided name
plBegin(name)
# Declares the end of a scope with the provided name
# The name shall match the 'begin' name, so that mismatches can be detected on the viewer side and lead to warnings
plEnd(name)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
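To guarantee that `plEnd` is called in a function with multiple exit points, a `try`/`finally` block can be used. A minimal sketch (the function is hypothetical):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
from palanteer import plBegin, plEnd

def load_config(path):
    plBegin("Load config")
    try:
        ...                       # Possible early returns or raised exceptions here
    finally:
        plEnd("Load config")      # Always executed, so the scope is always properly closed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~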
Two Pythonic constructs are derived from these functions:
**1) `plFunction` is a decorator which instruments the entering and leaving of the function**
- This is a safe way to manually instrument a selection of functions
- If automatic function instrumentation is enabled, it turns into a no-operation
Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
@plFunction
def fibonacci(root_number):
    ...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**2) `plScope` is a context manager, to use with the `with` keyword**
- This is a safe way to manually instrument a part of a function
- It is best used when automatic function instrumentation is disabled
- Indeed, when automatic function instrumentation is enabled, the calls to the 3 methods of the context manager (`__init__`, `__enter__` and `__exit__`) are themselves visible
Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
with plScope("Temporary"):
    ...
# (continue the program)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Data tracing
The instrumentation in this chapter traces strings or values of any numeric type inside the current scope.
!!! warning Important
Such events **must** be located inside a scope, never at the tree root.
Indeed, they carry only the logged data and are not timestamped.
Such events can be visualized in the viewer under several forms: text, curve or histogram.
They can optionally be associated with a `unit` by ending the event name with `##` followed by the unit name:
* The unit is stripped from the name, so it is not displayed as part of it
* The unit increases the semantics of the events and makes them more useful
* Only events with consistent units can be plotted together.
* "hexa" is a special, hardcoded unit which displays integers in hexadecimal
Ex: `"Duration##nanosecond"`, `"pointer##hexa"`, `"Banana##fruit"` (see also the sketch below)
!!! note Reminder:
As only string hashes are considered, the unit declaration is part of the unique identifier of an event.
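A minimal sketch of units in practice, using `plData` (described just below) with hypothetical event names:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
from palanteer import plData

plData("Duration##nanosecond", 1500)   # Associated with the unit "nanosecond"
plData("Pointer##hexa", 0x7f3a10)      # Displayed as a hexadecimal integer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~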
### plData
This function traces a named value.
The value can be of any numeric type or a string (processed as a dynamic string).
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# Traces a named string or numerical value
plData(name, value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Example of usage:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
a = 15
c = "I am a string"
plData("my value A", a)
plData("Truncated PI", 3.14159265359)
plData("Dynamic string", c)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Logs
Logs are timestamped messages associated to a category and a level.
For instance, Python exceptions are logged with `plLogWarn` on the category "Exception".
In the viewer:
- a dedicated log window provides filtering capabilities on threads, levels and categories
- an indicator is visible for each log on top of the timeline for the associated thread (configurable level)
Four log levels are defined (plus the `ALL` and `NONE` convenience bounds), in this order:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
LOG_LEVEL_ALL   = 0  # Just for code clarity
LOG_LEVEL_DEBUG = 0
LOG_LEVEL_INFO  = 1
LOG_LEVEL_WARN  = 2
LOG_LEVEL_ERROR = 3
LOG_LEVEL_NONE  = 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### plSetLogLevelRecord
This function dynamically sets the minimum log level to be recorded. All logs with a level strictly below the provided level are dynamically filtered out from the record.
The default value is `LOG_LEVEL_ALL`.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
plSetLogLevelRecord(level)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A level of `LOG_LEVEL_ALL` (equivalent to `LOG_LEVEL_DEBUG`) includes all logs in the record.
A level of `LOG_LEVEL_NONE` removes all logs from the record.
!!! warning
This function does not force any recording, it just dynamically configures the log filtering on the program side.
### plSetLogLevelConsole
This function dynamically sets the minimum log level to be displayed on console. All logs with a level strictly below the provided level are dynamically filtered out from the console display.
The default value is `LOG_LEVEL_WARN`.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
plSetLogLevelConsole(level)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A level of `LOG_LEVEL_ALL` (equivalent to `LOG_LEVEL_DEBUG`) displays all logs on the console.
A level of `LOG_LEVEL_NONE` does not display any log on the console.
!!!
The console display is active even if the `Palanteer` service is neither initialized nor enabled. Its configuration is fully independent of the log recording.
!!! warning
The typical usage of the console display is to easily troubleshoot a problem.
As the extra formatting and console output make its run-time cost significant, activating it while observing performance is not recommended.
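A minimal sketch of configuring both filters, assuming the log level constants are imported from the `palanteer` module:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
from palanteer import plSetLogLevelRecord, plSetLogLevelConsole, LOG_LEVEL_INFO, LOG_LEVEL_ERROR

plSetLogLevelRecord(LOG_LEVEL_INFO)     # Keep the 'info' level and above in the record
plSetLogLevelConsole(LOG_LEVEL_ERROR)   # Display only the 'error' level on the console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~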
### plLogDebug
This function traces a log, i.e. a timestamped message associated with a category and the `debug` level.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# Add a timestamped log with a string message
plLogDebug(category, msg)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Example of usage:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# String message
plLogDebug("error", "Very weird but recoverable case occured")
# Another message
plLogDebug("input", "Key '%c' pressed" % c)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
!!! warning Limitation
Unlike the C++ version, no format string arguments are accepted, so no parameters can be graphed.
This may evolve in the future.
### plLogInfo
This function is similar to the [`plLogDebug`](#pllogdebug) function but logs with the level `info`.
### plLogWarn
This function is similar to the [`plLogDebug`](#pllogdebug) function but logs with the level `warn`.
### plLogError
This function is similar to the [`plLogDebug`](#pllogdebug) function but logs with the level `error`.
### plMarker (deprecated)
Markers are superseded by the `log` APIs above, which generalize them. The `plMarker` APIs will be removed in a future release.
At the moment, they are mapped to a call to `plLogWarn`.
## Lock tracing
The instrumentation in this chapter traces actions performed on locks (as a general term).
The usage of locks in a multithreaded environment can deeply modify the dynamic behavior of a program.
Visualizing them becomes critical in complex programs, which is why a dedicated diagram in the viewer helps identify the bottlenecks across threads.
The lock API is "low level" so that it adapts to many existing lock primitives (mutex, semaphore, Event...), at the price of some manual instrumentation work.
!!!
The locking process has two different phases:
- waiting for the lock: the wait starts, then ends with a positive or negative outcome
- using the lock: the lock is first taken, then released.
### plLockWait
The locking process usually starts with waiting for the lock.
This function traces this first step.
When the wait ends, `plLockState(...)` or `plLockScopeState(...)` **must** be called to set both:
- the end of the waiting phase
- the state of the lock: `True` if the lock is taken, or `False` if not.
!!!
This "lock wait" information is crucial in a multithreaded program so it should be instrumented thoroughly.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# Set the start of waiting for a lock
plLockWait(lock_name)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Example of usage with a `threading.Lock` protecting a database:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
plLockWait("Database access") # Start waiting for the lock named "Database access"
database_lock.acquire()
plLockState("Database access", True); # >>> Do not forget this call! <<< Lock is taken (see next section)
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### plLockState
This function is involved in both phases of the locking process.
Its main role is to set the state of the lock usage:
- `True` state means the lock is taken by the thread
- `False` state means the lock is released (or not used) by the thread
Its other, implicit role is to **mark the end of the waiting phase** (if one was previously started), whatever the lock state value.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# Set the lock state
plLockState(lock_name, lockUseState)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
!!! tip
`plLockState` **must** be called just after the end of the waiting phase, if any, to mark its end
!!! tip
`plLockState` shall be placed just **before** the "unlock" call, so that there is no race condition in the tracing that could invert the event chronology.
Example of usage:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
...
# Lock is released
plLockState("Database access", False); # Lock is released (trace before the release call)
database_lock.release()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### plLockNotify
This function traces a notification or a "post" on a lock.
It has no direct effect on the lock state.
It typically matches the behavior of a `threading.Event.set()` call, or a semaphore post.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# Trace a lock notification
plLockNotify(lock_name)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
!!! tip
`plLockNotify` shall be placed just **before** the "notification" call, so that there is no race condition in the tracing that could invert the event chronology.
Example of usage:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import threading

wait_event = threading.Event()
...
if shall_wake_the_task:
    plLockNotify("Task wait")  # Notify the "Task wait" lock (associated with the event)
    wait_event.set()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
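For completeness, a possible sketch of the corresponding waiting side, modeling the event wait with the lock API (this continues the snippet above, the task logic is hypothetical):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# (In the waiting thread)
plLockWait("Task wait")           # Start waiting on the "Task wait" lock
wait_event.wait()                 # Blocks until the event is set by the other thread
plLockState("Task wait", True)    # Ends the waiting phase, the "lock" is considered taken
...                               # Perform the task
plLockState("Task wait", False)   # The "lock" is released
wait_event.clear()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~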
## CLIs
"CLIs" (Command Line Interface) are functions inside the instrumented program that can be called remotely and with parameters.
### plRegisterCli
This function registers a CLI.
It requires a [CLI handler](#clihandler), the name of the CLI, the [CLI parameter specification](base_concepts.md.html#cliparameterspecification) and a user description.
The name of the command shall not contain spaces. The user description is purely for documentation purposes.
!!!
CLIs can be registered before the instrumentation service is initialized.
This is even recommended, in order to avoid any race condition where a CLI is called before it is registered.
The declaration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
plRegisterCli(cli_handler,           # A function with a prototype compatible with the parameter specification
              name,                  # The name of the command. No space allowed
              param_specification,   # The specification string of the parameters
              description            # The description of the command, for users
              )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An example of CLI registration is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
def set_bounds_cli_handler(minValue, maxValue):
    ...

plRegisterCli(set_bounds_cli_handler, "config:setRange", "min=int max=int", "Sets the value bounds of the random generator")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### CLI handler
A CLI handler is a function triggered remotely that interacts with the controlled program.
It accepts typed parameters (integer, float or string) and returns a status and a text answer.
The prototype of the CLI handler matches the declared parameter specification when registered.
The returned value shall be a tuple `(status, text_answer)`:
- `status` is 0 for success, 1 for error
- `text_answer` is a string. In case of an error status, it is advised to use it to provide an explanation.
An example of handler for a CLI named `config:setRange`, declared with the parameter specification "`min=int max=int`", is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
# Handler (=implementation) of the example CLI
def set_bounds_cliHandler(minValue, maxValue):
    global globalMinValue, globalMaxValue
    if minValue > maxValue:
        # Not allowed case, the CLI execution has failed. The text answer contains some information about it
        return 1, "Minimum value (%d) shall be lower than the maximum value (%d)" % (minValue, maxValue)

    # Modify the state of the program
    globalMinValue = minValue
    globalMaxValue = maxValue

    # CLI execution was successful. Text is empty as our CLI does not generate any output
    return 0, ""
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
And a way to call it from a Python script with the `Palanteer` scripting module is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
palanteer_scripting.program_cli("config:setRange min=300 max=500")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Virtual threads
A "virtual thread" is a thread that is not managed by the OS. Some examples are the usage of asyncio or gevent.
As they do not require a kernel call but just a stack+register save/restore, using them can be faster than using OS threads
when many jobs need to wait on I/O (ex: sockets).
In the Python instrumentation, this support is automatic:
- the worker threads are automatically named `Workers/Async worker`, followed by a worker number if there is more than one
- the coroutines are automatically named after their top function name
  - a number is added if more than one coroutine has the same name
- each worker has a dedicated resource line which provides an overview of the worker usage
  - note: the worker OS thread also contains scopes showing the running coroutines, in addition to the worker resource line
!!!
In the viewer, a suspended coroutine is visually represented by a SOFTIRQ with the name "Suspended"
Some functions are automatically filtered out to remove some bloat from supposedly transparent mechanisms.
The exhaustive list is:
- the module `palanteer._cextension` from Palanteer, which is supposed to be transparent
- the module `_UnixSelectorEventLoop`, which is the asyncio glue on Linux
- the module `ProactorEventLoop`, which is the asyncio glue on Windows
- the function `_plFunctionInner` from Palanteer, which is the transparent mechanism for function decorator
- the function `_pl_garbage_collector_notif` from Palanteer, which is the transparent mechanism for garbage collector tracking
- the function `Thread.run`, which is the top method for each thread and does not carry any useful information
- the function and sub-functions of `_find_and_load`, which is a transparent part of the launch of the CPython interpreter
- the function and sub-functions of `Thread._bootstrap`, which is a "leave" of a Python thread, as the "enter" is not caught by Palanteer due to the thread bootstrap mechanism
- the function and sub-functions of `TimerHandle.cancel`, which is a transparent part of the coroutine mechanism
- the function and sub-functions of `_cancel_all_tasks`, which is a transparent part of the coroutine mechanism
!!!
The Python test program uses asyncio to demonstrate the result of using virtual threads
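As an illustration, a minimal sketch of an asyncio program relying only on this automatic support (the coroutine `fetch_data` is hypothetical):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import asyncio
from palanteer import plInitAndStart, plStopAndUninit

async def fetch_data(delay):
    # Each running coroutine appears as a virtual thread named after this function
    await asyncio.sleep(delay)    # The suspension is shown as "Suspended" in the viewer
    return delay

async def main():
    await asyncio.gather(fetch_data(0.1), fetch_data(0.2))

plInitAndStart("AsyncDemo")
asyncio.run(main())               # The asyncio worker appears under "Workers/Async worker"
plStopAndUninit()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~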
## Troubleshooting
**`make install` gives the error "invalid command 'bdist_wheel'"**
This missing command means "Binary DISTribution in wheel format".
The `wheel` package is missing, which is fixed with:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ none
pip install wheel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**CMake installs the Python modules locally, just for my user. I want it to be for all users**
The installation of the Python modules by `cmake` is done with
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ none
pip install <wheel package.whl>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the user has the privileges to do so (i.e. root or administrator), `pip` installs the package for all users. Otherwise, it installs it only for the current user.
The built wheel package location is displayed in the `make` output.
**Automatic tracing really slows down my program, how can I improve this?**
There are at least two things to try:
1. Maybe the automatic instrumentation of C functions is enabled (`with_c_calls=True` in the call to `plInitAndStart`, or usage of the `-c` option in the command line).
As they are widely used in builtins (`print`, `list.append`...), they may lead to heavy tracing.
If this information is not critical, disabling the C functions instrumentation can help.
1. To get full control and a very low overhead, typically for long-run monitoring in production, disable the automatic tracing of functions with `with_functions=False` in the call to `plInitAndStart`.
Then manually instrument the interesting locations of your program with `plBegin`, `plEnd` and the `@plFunction` decorator.
**I manually instrumented my program, how can I share it without forcing users to install Palanteer?**
For this particular purpose, the small module `./tools/palanteer_stub.py` implements all the APIs as no-operations.
All `palanteer` imports shall then be done as follows, with the `palanteer_stub.py` file accessible:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
try:
    import palanteer
except ModuleNotFoundError:
    import palanteer_stub as palanteer  # Fallback with empty functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
or
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
try:
    from palanteer import *
except ModuleNotFoundError:
    from palanteer_stub import *  # Fallback with empty functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If `Palanteer` is not installed, this imports empty functions as a fallback.