HOW TO LOG IN THE RUN FUNCTION OF A SCORING SCRIPT ON AML

Nguyễn Thanh Tú 170 Reputation points
2024-11-07T04:31:49.1566667+00:00

Hi

I am facing an issue while trying to log information in the score.py (scoring script) during online endpoint deployment.

I’ve been able to log information in the init function, but I cannot log anything in the run function.

Below is my code:

import json
import logging
import os
import sys
from io import StringIO
from pathlib import Path

import mlflow
# Import paths assumed for the helpers used below (recent MLflow versions,
# plus the python-json-logger and opencensus-ext-azure packages).
from mlflow.pyfunc.scoring_server import infer_and_parse_data, predictions_to_json
from opencensus.ext.azure.log_exporter import AzureLogHandler
from pythonjsonlogger.jsonlogger import JsonFormatter

connection_string = os.getenv("CONNECTION_STRING")
APPLICATIONINSIGHTS_CONNECTION_STRING = connection_string

# Set up a custom JSON formatter for logging
fmt = JsonFormatter("%(levelname)s %(asctime)s %(message)s %(filename)s %(funcName)s %(lineno)d")  # type: ignore[no-untyped-call]
 
# Set up a stream handler to output logs to stdout
sh = logging.StreamHandler(sys.stdout)
sh.setFormatter(fmt=fmt)  # Apply the custom JSON formatter to the stream handler
 
# Configure the root logger
logging.basicConfig(handlers=[sh], level=logging.INFO)
 
# Create a logger object for the current module
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)  # Set the logging level to INFO for this logger
 
# Note: basicConfig above already attached sh to the root logger, and records
# from this logger propagate to the root, so adding sh here again would emit
# every record twice.
 
# Set up an Azure Log Handler to send logs to Azure Application Insights
azure_handler = AzureLogHandler(connection_string=APPLICATIONINSIGHTS_CONNECTION_STRING)
logger.addHandler(azure_handler)  # Add the Azure log handler to the logger
 
workflow_name = os.getenv("WORKFLOW_NAME")
custom_dimensions = {"workflow_name": workflow_name}
 
def init() -> None:
    """Initialize the scoring process."""
    logger.info(f"connection string: {connection_string}", extra={"custom_dimensions": custom_dimensions})
    global model  # noqa: PLW0603
    global input_schema  # noqa: PLW0603
    # "model" is the path of the mlflow artifacts when the model was registered. For automl
    # models, this is generally "mlflow-model".
    model_path = Path(os.getenv("AZUREML_MODEL_DIR") or "") / "model"
    model = mlflow.pyfunc.load_model(model_path)
    input_schema = model.metadata.get_input_schema()
 
 
def run(raw_data: str | bytes) -> str:
    """Score a batch of records with the loaded MLflow model.

    Args:
    ----
        raw_data (str | bytes): JSON request body with a top-level
            "input_data" key in MLflow's dataframe-split format.

    Raises:
    ------
        Exception: If the request body lacks an "input_data" key.

    Returns:
    -------
        str: JSON-formatted predictions.
    """
    # Parse incoming JSON data
    logger.info(f"raw data: {raw_data}", extra={"custom_dimensions": custom_dimensions})
    json_data = json.loads(raw_data)
    if "input_data" not in json_data:
        msg = "Request must contain a top level key named 'input_data'"
        raise Exception(msg)  # noqa: TRY002
 
    # Initializing model prediction
    serving_input = {"dataframe_split": json_data["input_data"]}
    data = infer_and_parse_data(serving_input, input_schema)
    predictions = model.predict(data)
 
    # Return JSON-formatted predictions
    result = StringIO()
    predictions_to_json(predictions, result)
    return result.getvalue()
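
For reference, run expects a request body with a top-level "input_data" key in MLflow's dataframe-split format. A hypothetical example payload (the column names are placeholders, not from my actual model):

import json

# Hypothetical example payload; column names are placeholders.
payload = json.dumps({
    "input_data": {
        "columns": ["feature_1", "feature_2"],
        "data": [[1.0, 2.0], [3.0, 4.0]],
    }
})
# run(payload) wraps this as {"dataframe_split": ...}, parses it against the
# model's input schema, and returns JSON-formatted predictions.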


Is there a way to log information in the run function? Do I need to reconfigure the logger or use a different approach?

Azure Machine Learning
An Azure machine learning service for building and deploying models.

1 answer
  1. navba-MSFT 25,500 Reputation points Microsoft Employee
    2024-11-13T04:23:49.9166667+00:00

    @Nguyễn Thanh Tú I'm glad to see you were able to resolve your issue. Thanks for posting your solution so that others experiencing the same thing can easily reference it.

    Since Microsoft Q&A policy does not allow question authors to accept their own answers, I'll repost your solution below in case you'd like to Accept it.

    Issue:

    You were facing an issue logging information from score.py (the scoring script) during online endpoint deployment: logging worked in the init function, but nothing was logged from the run function.

    Resolution:

    You changed the logging level to DEBUG, after which the log messages from the run function appeared and print statements also worked.
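
    A minimal sketch of that change, assuming the same stream-handler setup as in your score.py; the level is raised from INFO to DEBUG on both the basicConfig call and the module logger:

    import logging
    import sys

    sh = logging.StreamHandler(sys.stdout)
    logging.basicConfig(handlers=[sh], level=logging.DEBUG)

    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)

    def run(raw_data):
        # With DEBUG enabled, this message shows up in the deployment logs
        # (for example via `az ml online-deployment get-logs`).
        logger.debug("raw data: %s", raw_data)
        return raw_data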

