A python logger class for Azure using OpenCensus sdk
This repo is currently archived. All the code in this repo has been moved to Azure Samples.
This document covers the python logger class using OpenCensus-Python sdk for Azure Application Insights.
Application monitoring is one of the important pillars of any software development project. To support monitoring and observability, a software project must be able to emit logs, distributed traces, and metrics.
Currently, application monitoring for Python projects is integrated with Azure Application Insights using the opencensus-python SDK, which supports logging, distributed tracing, and metric collection. To integrate a Python project with the SDK, the user has to write a lot of bootstrap code, and this needs to be repeated for every Python project in the application.
By implementing Python Logger class, all the bootstrap logic can be encapsulated inside a single class which then can be reused by multiple projects.
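As a hypothetical, simplified sketch of that idea (standard library only; the real class wires an AzureLogHandler and an AzureExporter instead of a StreamHandler), encapsulation might look like this:

```py
# Hypothetical sketch: one class owns the logger bootstrap logic
# (handler wiring, log level, naming) so projects do not repeat it.
import logging
import sys

class SimpleAppLogger:
    """Minimal stand-in for AppLogger, for illustration only."""

    def __init__(self, config=None):
        self.config = config or {}
        # Map the configured level name ("DEBUG", "INFO", ...) to its value.
        self.level = getattr(logging, self.config.get("log_level", "INFO"))

    def get_logger(self, component_name="AppLogger"):
        logger = logging.getLogger(component_name)
        logger.setLevel(self.level)
        if not logger.handlers:  # avoid duplicate handlers on repeated calls
            logger.addHandler(logging.StreamHandler(sys.stdout))
        return logger
```

Each project then needs only two lines to get a configured logger, instead of repeating the full bootstrap.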
For example, following the Azure documentation for the opencensus-python SDK, adding even a very basic logger and tracer to a Flask application requires the following code:
```py
import logging

from flask import Flask
from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.ext.flask.flask_middleware import FlaskMiddleware
from opencensus.trace import config_integration
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

APP_NAME = "Flask_App"
APP = Flask(APP_NAME)

config_integration.trace_integrations(["requests"])
config_integration.trace_integrations(["logging"])

def callback_add_role_name(envelope):
    """Callback function for opencensus."""
    envelope.tags["ai.cloud.role"] = APP_NAME
    return True

# APP_INSIGHTS_KEY holds the Application Insights instrumentation key.
app_insights_cs = "InstrumentationKey=" + APP_INSIGHTS_KEY

logger = logging.getLogger(__name__)
handler = AzureLogHandler(connection_string=app_insights_cs)
handler.add_telemetry_processor(callback_add_role_name)
logger.setLevel(logging.INFO)
logger.addHandler(handler)

azure_exporter = AzureExporter(connection_string=app_insights_cs)
azure_exporter.add_telemetry_processor(callback_add_role_name)

FlaskMiddleware(
    APP, exporter=azure_exporter, sampler=ProbabilitySampler(rate=1.0),
)
tracer = Tracer(exporter=azure_exporter, sampler=ProbabilitySampler(1.0))
```
This is the bare minimum code required for creating a logger and tracer. However, making it production ready requires more complex logic, and several scenarios (such as disabling telemetry during unit tests, or correlating traces across modules) are not covered by this basic implementation. If we add code for those scenarios, it will be repeated in every Python project in the application. To avoid that repetition, and hence enable code reuse, this Python logger class is required.
AppLogger is the logging class that contains the bootstrap code to initialize a logger using the opencensus-python SDK.
```py
class AppLogger:
    def __init__(self, config=None):
        """Create an instance of the Logger class.

        Args:
            config ([dict], optional):
                Contains the settings for the logger {"log_level": "DEBUG",
                "logging_enabled": "true",
                "app_insights_key": "<app insights key>"}
        """
        ...

    def get_logger(self, component_name="AppLogger", custom_dimensions={}):
        ...

    def get_tracer(self, component_name="AppLogger", parent_tracer=None):
        ...
```
Parameters used in the initialization of AppLogger:

config(optional): A dict that takes the config values required by the logger:

```py
config = {
    "log_level": "DEBUG",
    "logging_enabled": "true",
    "app_insights_key": "<app_insights_instrumentation_key>",
}
```
log_level(optional): The log level can be set by passing the desired value in the config dict. The standard Python logging levels (DEBUG, INFO, WARNING, ERROR, CRITICAL) are supported. The default value of `log_level` is "INFO".
logging_enabled(optional): This is used to enable or disable logging. By default its value is set to "true". To disable logging, `logging_enabled` can be set to "false". This is useful when we need to run unit tests and don't want to send telemetry. Please make sure to set the Application Insights key even when `logging_enabled` is set to "false", otherwise an exception will be thrown during creation of the logger.

app_insights_key: The Application Insights instrumentation key. It can be passed either in the config dict or via the environment variable `APPINSIGHTS_INSTRUMENTATION_KEY="YOUR KEY"`. If the key is set neither in the config nor in the environment variable, initialization of AppLogger will fail with an exception.
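The key-resolution rule just described can be sketched as follows (a stdlib-only illustration with names taken from this README, not the actual implementation):

```py
# Sketch: config wins, then the APPINSIGHTS_INSTRUMENTATION_KEY
# environment variable; if neither is set, fail with an exception.
import os

def resolve_instrumentation_key(config=None):
    key = (config or {}).get("app_insights_key") or os.environ.get(
        "APPINSIGHTS_INSTRUMENTATION_KEY"
    )
    if not key:
        raise ValueError(
            "app_insights_key not found in config or in the "
            "APPINSIGHTS_INSTRUMENTATION_KEY environment variable"
        )
    return key
```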
Function get_logger

```py
def get_logger(self, component_name="AppLogger", custom_dimensions={}):
```

The `get_logger` function adds an AzureLogHandler to the logger and sets `ai.cloud.role` to `component_name`. It also adds the default custom dimensions and returns a logger object.
Function get_tracer

```py
def get_tracer(self, component_name="AppLogger", parent_tracer=None):
```

The `get_tracer` function returns a Tracer object. It sets `component_name` as `ai.cloud.role`, which is used to identify the component in the application map, and sets the parent tracer for proper correlation.
Function enable_flask

```py
def enable_flask(self, flask_app, component_name="AppLogger"):
```

The `enable_flask` function enables the Flask middleware as described in the documentation. This is required to enable log tracing for Flask applications. Currently only a function for enabling Flask is provided; if the application requires integration with other libraries such as httplib, django, mysql, pymysql, pymongo, fastapi, or postgresql, corresponding functions need to be added.
Parameters used in the `get_logger` and `get_tracer` functions of AppLogger:

component_name(optional): The default value of this parameter is "AppLogger". It is best practice to pass the name of the component in which the logger is initialized, e.g. "API3". This helps in filtering logs in Application Insights based on the `component_name`, which appears as `cloud_RoleName` in the logs. Here is a screenshot of the application map in Application Insights:
parent_tracer(optional): This is required to set up correlation for tracing, typically to correlate the logs of modules that are called within the same process. Let's take an example where `main.py` calls a function in `package1`. To correlate the logs generated from `main.py` and `package1`, a tracer is created in `main.py` and passed to `package1_function` as `parent_tracer`. In `package1_function`, a tracer is created whose parent `traceid` is set inside the `get_tracer` function. By correlating traceids, all the logs can be filtered by one traceid. Please note that in Azure Application Insights, `traceid` is referred to as `operation_id`.
```py
# main.py
config = {"app_insights_key": "<some_key>"}
app_logger = AppLogger(config)
tracer = app_logger.get_tracer(component_name="main")
# call package1 function
some_result = package1_function(app_logger=app_logger, parent_tracer=tracer)
```

```py
# package1
config = {"app_insights_key": "<some_key>"}

def package1_function(app_logger, parent_tracer=None):
    # Fall back to a locally created AppLogger when none is passed in.
    app_logger = app_logger or AppLogger(config)
    tracer = app_logger.get_tracer(component_name="package1", parent_tracer=parent_tracer)
```
Here is a screenshot of Application Insights showing the correlation of traceids.
custom_dimensions(optional): A dict containing key:value pairs that set the default custom dimensions for the logger.

```py
custom_dimensions = {
    "key1": "value1",
    "key2": "value2",
}
app_logger = AppLogger(config)
logger = app_logger.get_logger(component_name="Package1", custom_dimensions=custom_dimensions)
```

Additional custom dimensions can be passed per call via the `extra` parameter:

```py
def my_method(self, **kwargs):
    # Do something
    # Then log
    properties = {"custom_dimensions": {"key3": "value3"}}
    logger.warning("Some warning", extra=properties)
    logger.info("Some info", extra=properties)
```
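The default-dimension behavior can be approximated with the standard library's `logging.LoggerAdapter` (this is an analogy for illustration, not the actual AppLogger implementation):

```py
# Sketch: merge default custom dimensions with the per-call
# "custom_dimensions" dict, per-call values winning on key clashes.
import logging

class DimensionsAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        merged = dict(self.extra.get("custom_dimensions", {}))
        per_call = kwargs.get("extra") or {}
        merged.update(per_call.get("custom_dimensions", {}))
        kwargs["extra"] = {"custom_dimensions": merged}
        return msg, kwargs
```

Every record emitted through the adapter then carries a `custom_dimensions` attribute combining both sources.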
To track and log the time taken by any function, a tracer span can be used as shown in the following example:
```py
config = {
    "log_level": "DEBUG",
    "logging_enabled": "true",
    "app_insights_key": "<app_insights_instrumentation_key>",
}
app_logger = AppLogger(config)
logger = app_logger.get_logger(component_name="SomeModule")
tracer = app_logger.get_tracer(component_name="SomeModule")

def test_function(app_logger=get_disabled_logger()):
    pass

with tracer.span("testspan"):
    test_function(app_logger)
```
The above code generates an entry in the dependency table in Application Insights with `target` = `testspan` and the time duration taken by `test_function`. For example, in the below screenshot we can see that the time taken by `util_func` is around 19 sec. More info about dependency monitoring can be found here.
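The span-based timing shown above can be approximated with a plain context manager (stdlib only; the real tracer additionally exports the measured duration to the Application Insights dependency table):

```py
# Sketch: time a block of code the way tracer.span("name") does,
# recording the elapsed duration under the span name.
import time
from contextlib import contextmanager

@contextmanager
def timed_span(name, durations):
    start = time.perf_counter()
    try:
        yield
    finally:
        durations[name] = time.perf_counter() - start
```

Usage mirrors the tracer example: `with timed_span("testspan", durations): test_function(app_logger)`.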
Unit tests for an application using `AppLogger` can either set `logging_enabled` = "false" or use `get_disabled_logger()`. Both disable logging during unit test execution. The following example shows the usage of `logging_enabled` = "false" and `get_disabled_logger()` in two unit tests.
```py
import uuid

from SomeModule import my_method

def test_my_method():
    config = {
        "log_level": "DEBUG",
        "logging_enabled": "false",
        "app_insights_key": str(uuid.uuid1()),
    }
    component_name = "TestComponent"
    app_logger = AppLogger(config=config)
    assert app_logger is not None
    logger = app_logger.get_logger(component_name=component_name)
    assert logger is not None

def test_my_method_with_disabled_logger():
    app_logger = get_disabled_logger()
    logger = app_logger.get_logger()
    assert logger is not None
```
Here are the steps required to use `AppLogger`:

1. Install the requirements:

   pip install -r .\monitoring\requirements.txt

2. Check out the examples to see the usage of `AppLogger`.

3. To execute the unit tests, use the below command:

   python -m unittest discover .\monitoring\tests\
Examples created using AppLogger can be found here
Julien Chomarat
Benjamin Guinebertière
Ankit Sinha
Prabal Deb
Megha Patil
Srikantan Sankaran
Frédéric Le Coquil
Anand Chugh