Promptflow Error unsupported operand type(s) for +: 'dict' and 'dict'

sambhav jain 10 Reputation points
2024-12-25T12:21:57.1166667+00:00

I have tried multiple prompt flows, built in different ways, but all of them result in the same error.

The flow works fine in the prompt flow compute session chat.
It also works fine the first time when called from Postman.

From the second call onwards, it starts throwing the error.

I am using gpt-4o-mini.

unsupported operand type(s) for +: 'dict' and 'dict'

I have researched a lot and it looks like the error comes from _trace.py while it aggregates token counts. Has anyone else solved this?

Here is detailed error: 2024-12-25 12:02:02 +0000 22 execution.flow ERROR Flow execution has failed. Cancelling all running nodes: chat_with_context. [2024-12-25 12:02:02 +0000][pfserving-app][ERROR] - Flow run failed with error: {'message': "Execution failure in 'chat_with_context': (TypeError) unsupported operand type(s) for +: 'dict' and 'dict'", 'messageFormat': "Execution failure in '{node_name}'.", 'messageParameters': {'node_name': 'chat_with_context'}, 'referenceCode': 'Tool/promptflow.executor.flow_executor', 'code': 'UserError', 'innerError': {'code': 'ToolExecutionError', 'innerError': None}, 'additionalInfo': [{'type': 'ToolExecutionErrorDetails', 'info': {'type': 'TypeError', 'message': "unsupported operand type(s) for +: 'dict' and 'dict'", 'traceback': 'Traceback (most recent call last):\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1344, in wrapper\n return f(*args, **kwargs)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 562, in wrapped\n token_collector.collect_openai_tokens_for_parent_span(span)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 143, in collect_openai_tokens_for_parent_span\n merged_tokens = {\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 145, in <dictcomp>\n key: (self._span_id_to_tokens[parent_span_id].get(key, 0) or 0) + (tokens.get(key, 0) or 0)\nTypeError: unsupported operand type(s) for +: 'dict' and 'dict'\n', 'filename': '/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/tracing/_trace.py', 'lineno': 145, 'name': '<dictcomp>'}}], 'debugInfo': {'type': 'ToolExecutionError', 'message': "Execution failure in 'chat_with_context': (TypeError) unsupported operand type(s) for +: 'dict' and 'dict'", 'stackTrace': '\nThe above exception was the direct cause of the 
following exception:\n\nTraceback (most recent call last):\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1034, in _exec\n output, aggregation_inputs = self._exec_inner_with_trace(\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 937, in _exec_inner_with_trace\n output, nodes_outputs = self._traverse_nodes(inputs, context)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1215, in _traverse_nodes\n nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1270, in _submit_to_scheduler\n return scheduler.execute(self._line_timeout_sec)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n raise e\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n each_node_result = each_future.result()\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n return self.__get_result()\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n raise self._exception\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n result = self.fn(*self.args, **self.kwargs)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in 
_exec_single_node_in_thread\n result = context.invoke_tool(node, f, kwargs=kwargs)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n result = self._invoke_tool_inner(node, f, kwargs)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 206, in _invoke_tool_inner\n raise ToolExecutionError(node_name=node_name, module=module) from e\n', 'innerException': {'type': 'TypeError', 'message': "unsupported operand type(s) for +: 'dict' and 'dict'", 'stackTrace': 'Traceback (most recent call last):\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n return f(**kwargs)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1344, in wrapper\n return f(*args, **kwargs)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 562, in wrapped\n token_collector.collect_openai_tokens_for_parent_span(span)\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 143, in collect_openai_tokens_for_parent_span\n merged_tokens = {\n File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 145, in <dictcomp>\n key: (self._span_id_to_tokens[parent_span_id].get(key, 0) or 0) + (tokens.get(key, 0) or 0)\n', 'innerException': None}}} [2024-12-25 12:02:02 +0000][pfserving-app][INFO] - Finish monitoring request, request_id: b42748db-9db5-4c52-9a45-4cf3fd2c3fa1, client_request_id: 542443f3-7049-4b7b-9f7b-dc117afe56b1.
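The failing dict comprehension in the traceback can be reproduced outside promptflow. A plausible trigger (an assumption on my part, not confirmed by the traceback itself) is that newer OpenAI usage payloads contain a nested dict such as `completion_tokens_details`, so one of the "token count" values is itself a dict and the `+` in the merge fails:

```python
# Minimal reproduction of the TypeError from promptflow's
# tracing/_trace.py token merge. The nested "completion_tokens_details"
# value is a dict, not an int, so the merge attempts dict + dict.
parent_tokens = {
    "total_tokens": 100,
    "completion_tokens_details": {"reasoning_tokens": 1},  # nested dict
}
child_tokens = {
    "total_tokens": 50,
    "completion_tokens_details": {"reasoning_tokens": 2},
}

try:
    merged = {
        key: (parent_tokens.get(key, 0) or 0) + (child_tokens.get(key, 0) or 0)
        for key in set(parent_tokens) | set(child_tokens)
    }
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'dict' and 'dict'
```

This matches the error text exactly, which supports the idea that the failure is in the platform's token-accounting code rather than in the flow itself.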

Azure OpenAI Service

1 answer

  1. Sina Salam 15,011 Reputation points
    2024-12-26T19:27:49.1333333+00:00

    Hello sambhav jain,

    Welcome to the Microsoft Q&A and thank you for posting your questions here.

    I understand that you're facing an issue in Promptflow's internal token tracing logic (_trace.py).

    Since this is a platform-side issue, here is a targeted resolution:

    Step 1:

    Double-check the environment variable you set to disable token tracing. Ensure the following is correctly applied in your environment: PROMPTFLOW_TOKEN_TRACING_DISABLED=true
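    A quick way to verify the variable is actually visible to the serving process, assuming the variable name above is the one your runtime version honors (check the promptflow docs to confirm; for a managed online deployment you would set it under the deployment's environment variables rather than in a shell):

```shell
# Set the tracing kill-switch and confirm it is visible to child
# processes (the serving app reads it from its environment).
export PROMPTFLOW_TOKEN_TRACING_DISABLED=true
echo "$PROMPTFLOW_TOKEN_TRACING_DISABLED"
```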

    If this variable is already set and the issue persists, proceed to Step 2.

    Step 2:

    As @Deirdre O'Regan noted, the error was not present two weeks ago, so investigate whether a recent update to Promptflow (or its dependencies) introduced it:

    • Review the Azure Updates page for Promptflow or gpt-4o-mini changes.
    • Check the Promptflow GitHub repository or release notes for known issues.

    Step 3: Temporary Workaround

    If no update resolves the issue, consider rolling back to a previously working version of Promptflow. You may need to set up an isolated environment with the older version: pip install promptflow==<previous_version>

    Step 4:

    Since this issue appears to be platform-related and many users are facing it, I would advise submitting a support ticket on the Azure Portal, including:

    • A detailed error log.
    • Steps to reproduce the issue (e.g., environment variables set, template used, etc.).
    • Confirmation that the issue persists even in Azure AI Foundry UI.

    While awaiting resolution from Azure:

    1. Test the flow using a local deployment if feasible.
    2. Avoid token tracing entirely by ensuring it is disabled, so the failing merge logic is bypassed.

    @sambhav jain

    The issue indeed stems from Promptflow's internal logic (_trace.py) for token tracing. If disabling token tracing via the environment variable (PROMPTFLOW_TOKEN_TRACING_DISABLED=true) does not work, this is likely a bug introduced in a recent update. Please report this issue to Azure Support with detailed logs. In the meantime, consider testing with an earlier version of Promptflow or using a local deployment.
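    For illustration of what a platform-side fix would need to do, here is a hedged sketch of a token merge that tolerates nested dicts (as seen in newer OpenAI usage payloads). This is not the actual promptflow patch, just a demonstration of the required behavior:

```python
def merge_tokens(a: dict, b: dict) -> dict:
    """Recursively sum numeric token counts, merging nested dicts
    instead of attempting dict + dict (the source of the TypeError)."""
    merged = {}
    for key in set(a) | set(b):
        va, vb = a.get(key, 0), b.get(key, 0)
        if isinstance(va, dict) or isinstance(vb, dict):
            merged[key] = merge_tokens(
                va if isinstance(va, dict) else {},
                vb if isinstance(vb, dict) else {},
            )
        else:
            merged[key] = (va or 0) + (vb or 0)
    return merged

# Example: nested usage details no longer crash the merge.
result = merge_tokens(
    {"total_tokens": 100, "completion_tokens_details": {"reasoning_tokens": 2}},
    {"total_tokens": 50, "completion_tokens_details": {"reasoning_tokens": 3}},
)
print(result)
```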

    @Deirdre O'Regan

    Since this error is occurring even when testing endpoints through Azure AI Foundry, it confirms a broader issue with the service. The error’s recent appearance suggests a possible regression or update-related bug. Please submit a support ticket to Azure with details of the templates and environment used. This will help Azure prioritize a fix.

    I hope this is helpful! Do not hesitate to let me know if you have any other questions.


    Please don't forget to close the thread by upvoting and accepting this as an answer if it is helpful.

