Diffstat (limited to 'pw_log_tokenized/docs.rst')
 pw_log_tokenized/docs.rst | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/pw_log_tokenized/docs.rst b/pw_log_tokenized/docs.rst
index 24a6d8d4a..0e84c5909 100644
--- a/pw_log_tokenized/docs.rst
+++ b/pw_log_tokenized/docs.rst
@@ -4,7 +4,8 @@
pw_log_tokenized
----------------
The ``pw_log_tokenized`` module contains utilities for tokenized logging. It
-connects ``pw_log`` to ``pw_tokenizer``.
+connects ``pw_log`` to ``pw_tokenizer`` and supports
+:ref:`module-pw_log-tokenized-args`.
C++ backend
===========
@@ -55,7 +56,7 @@ letter.
.. code-block::
- "■key1♦contents1■key2♦contents2■key3♦contents3"
+ "■key1♦contents1■key2♦contents2■key3♦contents3"
This format makes the message easily machine parseable and human readable. It is
extremely unlikely to conflict with log message contents due to the characters
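Since the fields are plain delimiter-separated text, a tool can recover them with ordinary string splitting. As a minimal Python sketch (the function name is hypothetical, not part of the module's API):

```python
def parse_log_metadata(message: str) -> dict[str, str]:
    """Split a "■key♦value" metadata string into a dict (illustrative only)."""
    pairs = {}
    # Each field starts with "■"; the key and value are separated by "♦".
    for field in message.split("■"):
        if not field:
            continue  # Skip the empty chunk before the leading "■".
        key, sep, value = field.partition("♦")
        if sep:
            pairs[key] = value
    return pairs

print(parse_log_metadata("■key1♦contents1■key2♦contents2■key3♦contents3"))
```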
@@ -67,7 +68,7 @@ Implementations may add other fields, but they will be ignored by the
.. code-block::
- "■msg♦Hyperdrive %d set to %f■module♦engine■file♦propulsion/hyper.cc"
+ "■msg♦Hyperdrive %d set to %f■module♦engine■file♦propulsion/hyper.cc"
Using key-value pairs allows placing the fields in any order.
``pw_log_tokenized`` places the message first. This is preferred when tokenizing
@@ -175,6 +176,11 @@ object:
token_buffer.size());
}
+The binary tokenized message may be encoded in the :ref:`prefixed Base64 format
+<module-pw_tokenizer-base64-format>` with the following function:
+
+.. doxygenfunction:: PrefixedBase64Encode(span<const std::byte>)
+
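For reference, the prefixed Base64 format is a ``$`` prefix character followed by the standard Base64 encoding of the binary message. A minimal Python sketch of the encoding (the function name is hypothetical; see the ``pw_tokenizer`` docs for the authoritative definition):

```python
import base64


def prefixed_base64_encode(binary_message: bytes) -> str:
    """Encode a binary tokenized message as "$" + Base64 (illustrative)."""
    return "$" + base64.b64encode(binary_message).decode("ascii")


# For example, a 4-byte token with no arguments:
print(prefixed_base64_encode(bytes([0x01, 0x02, 0x03, 0x04])))  # → $AQIDBA==
```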
Build targets
-------------
The GN build for ``pw_log_tokenized`` has two targets: ``pw_log_tokenized`` and
@@ -184,6 +190,14 @@ implements the backend for the ``pw_log`` facade. ``pw_log_tokenized`` invokes
the ``pw_log_tokenized:handler`` facade, which must be implemented by the user
of ``pw_log_tokenized``.
+GCC has a bug resulting in section attributes of templated functions being
+ignored. This in turn means that log tokenization cannot work for templated
+functions, because the token database entries are lost at build time.
+For more information, see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70435.
+If you are using GCC, the ``gcc_partially_tokenized`` target can be used as a
+``pw_log`` backend instead. It tokenizes as much as possible and falls back to
+string logging via ``pw_log_string:handler`` for the rest.
+
Python package
==============
``pw_log_tokenized`` includes a Python package for decoding tokenized logs.