Manage Content
==============

The power of L7|ESP depends on the strength of content developed by L7 Informatics and its customers. This section describes best practices for developing robust, easy-to-use content.

Auto-Generated Sequence IDs
---------------------------

You can create and customize your own auto-generated sequence IDs for Entities and Experiments through the ``lab7.conf`` file, as described in this section.

Go to the L7|ESP installation directory and locate the ``lab7.conf`` file. Find the ``lims`` section and note that ``"ESP SEQUENCE"`` is the default sequence used to automatically generate sequence IDs for Entities.

.. image:: images/auto-sequence-ID.png

The ``"ESP SEQUENCE"`` generates unique Entity IDs in the following format: ``ESP000001``, ``ESP000002``, ``ESP000003``, etc.

.. image:: images/auto-sequence-ID-02.png

Create Auto-Generated Sequence IDs for Entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. In the ``lims`` section of the ``lab7.conf`` file, add your new sequence ID below ``"ESP SEQUENCE"`` with the following fields:

   * ``name``, the name by which the sequence ID is referred to in L7|ESP.
   * ``format``, a Python format string that defines how the sequence ID is formatted using one or more of the following arguments:

     - ``{sample_type}``, the associated Entity Type.
     - ``{sample_number}``, the current value of the numbering sequence.
     - ``{date}``, the date the Entity was created, in ``YYYY-MM-DD`` format.
     - ``{time}``, the time the Entity was created, in ``HHMMSS`` format.
     - ``{datetime}``, the date and time the Entity was created, in ``YYYY-MM-DD`` and ``HHMMSS`` format.

   * ``sequence``, the name of the sequence (corresponds to an entry in the ``sequences`` section).

   .. image:: images/auto-sequence-ID-03.png

2. In the ``sequences`` section, add the ``sequence`` name from Step 1 (e.g., ``patient_sequence``) followed by the number to start with (e.g., ``100``).

   .. image:: images/auto-sequence-ID-04.png

   The ``"PATIENT SEQUENCE"`` example generates unique Entity IDs in the following format: ``PATIENT0100``, ``PATIENT0101``, ``PATIENT0102``, etc.

3. Restart L7|ESP using the command ``l7 stop && l7 start``.

To begin using the new Entity ID sequence, create a new Entity Type or update an existing Entity Type (refer to :ref:`Manage Sample Types`).

Create Auto-Generated Sequence Names for Experiments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. In the ``id_sequences`` section of the ``lab7.conf`` file, add your new sequence below ``experiment`` with the following fields:

   * ``name``, the name by which the sequence is referred to in L7|ESP.
   * ``format``, a Python format string that defines how the sequence name is formatted using one or more of the following arguments:

     - ``{workflow}``, the associated **Workflow**.
     - ``{experiment_number}``, the current value of the numbering sequence.
     - ``{date}``, the date the **Experiment** was created, in ``YYYY-MM-DD`` format.
     - ``{time}``, the time the **Experiment** was created, in ``HHMMSS`` format.
     - ``{datetime}``, the date and time the **Experiment** was created, in ``YYYY-MM-DD`` and ``HHMMSS`` format.

   * ``sequence``, the name of the sequence (corresponds to an entry in the ``sequences`` section).

   .. image:: images/auto-sequence-ID-06.png

2. In the ``sequences`` section, add the ``sequence`` name from Step 1 (e.g., ``test_sequence``) followed by the number to start with (e.g., ``01``).

   .. image:: images/auto-sequence-ID-07.png

   The ``"TEST SEQUENCE"`` example generates unique **Experiment Names** in the following format: ``TEST00001``, ``TEST00002``, ``TEST00003``, etc.

3. Restart L7|ESP using the command ``l7 stop && l7 start``.

To begin using the new **Experiment Name** sequence, create a new **Workflow** or update an existing **Workflow** (refer to :ref:`Manage Workflows`).
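In both cases, the ``format`` value is an ordinary Python format string, so you can preview the IDs a given format produces outside of L7|ESP. The following is a minimal sketch of the formatting behavior only; the zero-padding width and field values shown here are illustrative, and the real configuration belongs in ``lab7.conf`` as shown in the screenshots above.

.. code-block:: python

    # Illustration of the format-string behavior only; the exact format
    # and padding used in your lab7.conf may differ.
    fmt = "{sample_type}{sample_number:04d}"

    for number in (100, 101, 102):
        print(fmt.format(sample_type="PATIENT", sample_number=number))

    # PATIENT0100
    # PATIENT0101
    # PATIENT0102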
.. _Ontology Feature:

Ontology Feature
----------------

The **Ontology** feature is a customizable template for enforcing predefined formats when data is collected in a **Worksheet**. For example, if a **Protocol** contains a "Name" column for users to document their names, you can create and define a "Name" **Ontology** that requires the format of "First Name" and "Last Name" for that column. The **Ontology** feature is beneficial because it standardizes how data is captured across all users.

To create an **Ontology**:

1. Go to the L7|ESP installation directory. Under the **data/reference** folder, create a **.json** file titled "attribute_ontology".

2. In the **.json** file, define your formats (refer to the examples below).

   .. code-block:: json

      {
          "name": "Sample Picklist",
          "type": "dropdown",
          "description": "Sample Species",
          "content": ["Mouse", "Fly", "Human"],
          "defaultValue": "Human",
          "multiple": false,
          "required": true
      }

   .. code-block:: json

      {
          "name": "Employee Data",
          "type": "category",
          "content": [
              {"name": "First Name", "type": "text"},
              {"name": "Last Name", "type": "text"},
              {"name": "Age", "type": "numeric"},
              {"name": "Actively Employed", "type": "checkbox", "defaultValue": true},
              {
                  "name": "Address",
                  "type": "category",
                  "content": [
                      {"name": "Street", "type": "text"},
                      {"name": "City", "type": "text"},
                      {"name": "State", "type": "text", "defaultValue": "Texas", "required": true},
                      {"name": "Zip", "type": "numeric"}
                  ]
              }
          ]
      }

3. To activate the **Ontology** feature, go to the L7|ESP installation directory and locate the **__init__.py** file. Find the ``DEFAULT_CONFIG =`` section and add ``ontology_mode: TRUE,``.

   .. image:: images/activate-ontology.png

Once activated, the **Ontology** feature is visible and can be used when adding or updating a column in a Protocol (refer to :ref:`Manage Protocols`).

Banners
-------

**System Admins** can create a **Banner** in L7|ESP to easily communicate information to all users. The **Banner** is displayed at the top of every page in L7|ESP until it is removed by the **Admin**.

To create a **Banner**:

1. Log in as an **Admin**.
2. Paste ``/main/static/util.html`` after the ``.com`` in your L7|ESP URL.
3. From the API page:

   * In the **API Endpoint** field, enter ``/api/banner``.
   * In the **Method** field, enter ``POST``.
   * In the **Data** field, enter ``{"message": "Your message here"}``.

4. Click **Execute**.

   .. image:: images/create-banner.png

5. Return to L7|ESP and the **Banner** will be visible.

   .. image:: images/view-banner.png

To remove a **Banner**:

1. Log in as an **Admin**.
2. Paste ``/main/static/util.html`` after the ``.com`` in your L7|ESP URL.
3. From the API page:

   * In the **API Endpoint** field, enter ``/api/banner``.
   * In the **Method** field, enter ``POST``.
   * In the **Data** field, enter ``{"message": ""}``.

4. Click **Execute**.

   .. image:: images/remove-banner.png

5. Return to L7|ESP and the **Banner** is gone.
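The same ``/api/banner`` endpoint can also be scripted rather than driven through the **util.html** page. Below is a minimal sketch using Python's ``requests`` library; the base URL is a placeholder, and how you authenticate (session cookie, token, etc.) depends on your deployment, so the session setup shown here is an assumption.

.. code-block:: python

    import requests

    BASE_URL = "https://esp.example.com"  # placeholder for your L7|ESP URL

    # Authentication is deployment-specific; this assumes an already
    # authenticated session (e.g., a reused session cookie or token).
    session = requests.Session()

    # Create (or replace) the banner.
    session.post(BASE_URL + "/api/banner",
                 json={"message": "Scheduled maintenance tonight at 6 PM"})

    # Remove the banner by posting an empty message.
    session.post(BASE_URL + "/api/banner", json={"message": ""})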
Data References
---------------

A key part of building usable content is using data input or ingested at a previous time for "smart" interactions, such as computing dilution volumes based on ingested concentrations. L7|ESP provides a number of expressions that allow for referencing content, each with different strengths. The following guidelines should generally be followed to ensure robust and reusable content.

.. list-table:: Data reference guidelines
   :widths: 20 20 20 20 20
   :header-rows: 1

   * - Reference From
     - Reference To
     - Expression
     - Expression Location
     - Comments
   * - Protocol cell
     - Another cell within the same **Protocol**
     - ``column_value('other column')``
     - Column default value. For instance:

       .. literalinclude:: columndefault.sh
          :language: bash
          :linenos:
     - L7 Informatics is considering a shorter alias such as "col" or "cell" as shorthand for this function.
   * - Protocol cell
     - Cell within another **Protocol** in the **Workflow**
     - ``column_value('other column', 'other protocol')``
     - **Data Link**
     - In general, a **Protocol** should have all of the information it needs to perform its own calculations/computations. If external data is required, the **Protocol** should create a (possibly hidden) column that is referenced within the **Protocol**; data linking can then be used to link that column to another value within the **Workflow**. This makes the **Protocol**'s "input requirements" explicit.
   * - Protocol cell
     - Cell within a **Protocol** outside the current **Workflow**
     - ``tagged_value(['tag'])`` or ``tagged_value(['tag'], generation=-N)`` (``N`` being the number of generations up to search)
     - Preferably a **Data Link**, to avoid embedding/hiding the external dependency too deep in the content
     - Using ``column_value`` to grab values outside the current **Workflow** is possible but discouraged because it increases the fragility of the content. Either ``column_value`` or ``tagged_value`` in this context results in an additional query at this point in time.
   * - Custom report (JavaScript)
     - Cell within a Protocol
     - ``column_value_for_uuid`` or ``tagged_value`` expressions, evaluated via ``POST`` to ``/api/expressions/eval``; alternatively, craft a custom query for your needs
     - Within the report
     -
   * - Custom bioinformatics scripts
     - Cells within a Protocol
     -
       * Use ``esp.models.Experiment`` to fetch the data.
       * Use ``espclient.core.dataaccess`` objects to fetch the data. In particular, ``SheetValue`` (equivalent to ``column_value``), ``ProtocolValue`` (equivalent to ``column_value_for_uuid``), and ``TaggedValue`` (equivalent to ``tagged_value``) are all available. If your data may come from multiple different sources, you can chain the lookups using ``ChainedValue``.
     - Within the script
     - Most scripts will focus on accessing data present within the current **Experiment**/**Worksheet**. Consequently, using ``esp.models`` to access and manipulate the data is the generally preferred mechanism.

Use of ``espclient.core.dataaccess`` should be reserved for use cases where the data needs to be fetchable from a variety of locations (such as fetching data from parent **Entities** or data outside the current **Workflow**), or where the data source is not known ahead of time. See, for instance, ``espclient.core.dataaccess:standard_chained_lookup``, which converts a string like ``"tag:concentration,qubit;protocol:Qubit.Concentration;fixed:"`` into a chained lookup that first tries to find a value tagged with ``"concentration"`` and ``"qubit"``, falls back to the "Concentration" column of the "Qubit" protocol, and, failing that, supplies an empty string.
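To make the fallback semantics of such a lookup spec concrete, here is a standalone sketch. It is not the ``espclient`` implementation; it only illustrates how a spec string of that form decomposes into an ordered chain of lookups that are tried first to last.

.. code-block:: python

    # Standalone illustration only -- not the espclient implementation.
    # It shows how a lookup spec decomposes into an ordered fallback chain.
    def parse_lookup_spec(spec):
        """Split a spec such as
        'tag:concentration,qubit;protocol:Qubit.Concentration;fixed:'
        into (kind, argument) pairs, tried first to last."""
        chain = []
        for part in spec.split(";"):
            kind, _, arg = part.partition(":")
            chain.append((kind, arg))
        return chain

    print(parse_lookup_spec("tag:concentration,qubit;protocol:Qubit.Concentration;fixed:"))
    # [('tag', 'concentration,qubit'), ('protocol', 'Qubit.Concentration'), ('fixed', '')]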
Testing
-------

1. When creating ``espclient``-based **Workflow** or **Workflow Chain** tests:

   - Always provide a ``verify`` block that verifies calculations for every column with a default value (whether a literal default or an expression-based default).
   - When specifying **Entities** and providing identifiers for those **Entities**, use large-numbered identifiers (e.g., ``SAM999901`` instead of ``SAM000001``). The client pushes the names directly without triggering any updates to the underlying database sequence, so if you use low-numbered identifiers and then attempt manual testing, you can easily end up with multiple ``SAM000001`` **Entities** and a confusing UI.

2. For accelerated development, create test cases for each **Workflow** so you can update and develop that **Workflow** in isolation from other **Workflows**.

3. Create unit tests for **custom expression** functions.

4. Make sure all branches of **Workflow Chain** transitions are tested.

5. You still need one or more end-to-end (E2E) tests to verify that the full **Workflow Chain** works as expected.

6. You still need to walk through the entire **Workflow Chain** one or more times manually, especially to verify any custom JavaScript (and/or to write Cypress tests).

Custom JavaScript
-----------------

Understand how to use the ``renderers.js`` and ``invokables.js`` files. In general:

- ``renderers.js`` can be used to add custom column renderers.
- ``invokables.js`` can be used to add reusable functions that can be called from ``onrender`` and ``onchange`` handlers.

Minimize the amount of JavaScript implemented directly in ``onrender``/``onchange`` handlers by putting the code in ``invokables.js`` instead. This increases productivity significantly: the developer can update the ``invokables.js`` file and refresh the page, rather than creating a new experiment with each JavaScript code change.

Complexity of Custom (Python) Expressions
-----------------------------------------

The more complex an expression, the more likely it contains an error, and complex expressions can only be tested by loading the actual **Workflow** content. In general, prefer registering custom expression functions that receive parameters and calculate the desired result. These functions can then be unit tested directly using standard Python testing tools, making it much faster to cover a larger number of edge cases.
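For example, a calculation such as a dilution volume can live in a plain Python function and be covered by ordinary unit tests. The function and test below are hypothetical, and the mechanism for registering the function as a custom expression is not shown; the point is only that the calculation itself is testable without loading **Workflow** content.

.. code-block:: python

    # Hypothetical custom expression function. Registering it as an
    # expression is deployment-specific and not shown; the calculation
    # itself can be unit tested in isolation.
    def dilution_volume(stock_conc, target_conc, final_volume):
        """Volume of stock required to reach target_conc in final_volume
        (C1 * V1 = C2 * V2)."""
        if stock_conc <= 0:
            raise ValueError("stock concentration must be positive")
        return (target_conc * final_volume) / stock_conc


    def test_dilution_volume():
        # Diluting a 100 ng/uL stock to 10 ng/uL in 50 uL needs 5 uL of stock.
        assert dilution_volume(100.0, 10.0, 50.0) == 5.0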
Pipelines vs. Invokables
------------------------

Both **Pipelines** and **Invokables** allow L7 Informatics to write custom Python that interacts with L7|ESP to perform automation tasks. **Pipelines** allow **Pipeline Protocols** to be included in the LIMS app to give you, the user, push-button automation. **Invokables** can be triggered from custom **Protocol** buttons, which also gives you push-button automation. So, when should you use a **Pipeline**/**Pipeline Protocol** and when should you use an **Invokable**? Below are a few guidelines.

First, determine whether the process requires provenance tracking (see the flowchart below).

.. image:: images/does-my-process-require-provenance.png
   :width: 400px

If the process requires provenance, a **Pipeline** is your only option. If the process does not require provenance, consider the following questions.

Does the process make changes to the database? Do all of the changes need to be in a single transaction?

* If they do, then an **Invokable** is your only option, but distinguish between "nice to be in a single transaction" and "**must** be in a single transaction".
* In general, if a process is idempotent (i.e., you can run it a second time and nothing about the system changes compared with the first run), a transaction is helpful but not required.
* Idempotent operations lean toward **Pipelines**; non-idempotent operations lean toward **Invokables**.

Does the process run quickly?

* Slow processes should generally be avoided in **Invokables** at present, since they tie up a web worker until the process is finished.

Does the process need to run once for all **Entities**, with the results of the process distributed across **Entities** and different values for each **Entity**?

* Doing this with a **Pipeline** generally works best with two **Protocols**; alternatively, it can be accomplished relatively easily with an **Invokable** plus a custom **Protocol** button.

If there are no requirements dictating the use of a **Pipeline** vs. an **Invokable**, consider the user experience.

* If the process needs to be executed once for each **Entity** in a **Worksheet**, a **Pipeline Protocol** makes the most sense because it gives you a button per **Entity** plus a nicer UX (e.g., progress monitoring and associated **Reports**/dialog).
* If the process needs to be executed once for all **Entities** in a **Worksheet**, an **Invokable** is usually the better UX, depending on the nature of the **Task**.

The final consideration is development and maintenance burden.

* **Invokables** are time-consuming to develop against because the internal server APIs they must use are more cumbersome to learn than the customer APIs.
* They are more expensive to maintain because they are directly tied to the internal server APIs; if those APIs change, the **Invokables** must be updated.
* In general, if the UX won't significantly suffer for the user, lean toward **Pipelines** as the preferred mechanism of automation.