OSSS.ai.orchestration.classification_bridge
Helpers for passing classifier outputs into the LangGraph orchestrator.
Goal:

- Give the classifier / upstream HTTP layer a single, consistent way to attach classification results to the orchestrator config.
- The orchestrator then reads these from:
    * ``config["classifier"]`` (flat, legacy)
    * ``config["execution_state"]["classifier_profile"]``
    * ``config["execution_state"]["task_classification"]``
    * ``config["execution_state"]["cognitive_classification"]``
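For orientation, a consumer inside the orchestrator would read these locations roughly as follows. This is a hedged sketch, not the module's actual code: the helper name and the fallback order (nested key first, flat legacy key second) are assumptions.

```python
from typing import Any, Dict, Optional


def read_classifier_profile(config: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """Hypothetical reader: prefer the canonical nested location,
    fall back to the flat legacy ``config["classifier"]`` key."""
    execution_state = config.get("execution_state", {})
    return execution_state.get("classifier_profile") or config.get("classifier")
```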
attach_classifier_to_config_inplace(config, *, classifier_profile=None, task_classification=None, cognitive_classification=None)
Mutate config in-place to attach classifier outputs in a canonical way.
This function:

- Puts a flat ``config["classifier"]`` in place for legacy consumers.
- Attaches a richer classifier profile into ``config["execution_state"]["classifier_profile"]``.
- Mirrors task and cognitive classifications into ``config["execution_state"]["task_classification"]`` and ``config["execution_state"]["cognitive_classification"]``.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ``config`` | ``Dict[str, Any]`` | Orchestrator configuration dictionary to be mutated in-place. | *required* |
| ``classifier_profile`` | ``Optional[ClassifierProfile]`` | Rich classifier profile (intent, domain, action, etc.). | ``None`` |
| ``task_classification`` | ``Optional[TaskClassification]`` | Optional task-level classification details. | ``None`` |
| ``cognitive_classification`` | ``Optional[CognitiveClassification]`` | Optional cognitive classification details. | ``None`` |
Returns:

| Type | Description |
|---|---|
| ``Dict[str, Any]`` | The same ``config`` dictionary, mutated in-place and returned for convenience. |
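The documented behavior can be sketched as follows. This is a minimal illustration under stated assumptions, not the module's implementation: classification values are typed here as plain dicts rather than the ``ClassifierProfile`` / ``TaskClassification`` / ``CognitiveClassification`` classes, to keep the sketch self-contained.

```python
from typing import Any, Dict, Optional


def attach_classifier_to_config_inplace(
    config: Dict[str, Any],
    *,
    classifier_profile: Optional[Dict[str, Any]] = None,
    task_classification: Optional[Dict[str, Any]] = None,
    cognitive_classification: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    """Sketch: mutate ``config`` in-place to attach classifier outputs."""
    # Flat copy for legacy consumers.
    if classifier_profile is not None:
        config["classifier"] = classifier_profile

    # Canonical nested locations under execution_state.
    execution_state = config.setdefault("execution_state", {})
    if classifier_profile is not None:
        execution_state["classifier_profile"] = classifier_profile
    if task_classification is not None:
        execution_state["task_classification"] = task_classification
    if cognitive_classification is not None:
        execution_state["cognitive_classification"] = cognitive_classification

    # Same dict the caller passed in, returned for chaining.
    return config
```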
build_orchestrator_config(*, query, base_config=None, classifier_profile=None, task_classification=None, cognitive_classification=None)
Convenience helper used by your HTTP / service layer.
Example usage:

```python
clf = run_classifier(query)
config = build_orchestrator_config(
    query=query,
    base_config={
        "execution_config": {"use_rag": True, "graph_pattern": "standard"},
    },
    classifier_profile=clf,
    task_classification=clf.get("task_classification"),
    cognitive_classification=clf.get("cognitive_classification"),
)
ctx = await orchestrator.run(query, config=config)
```
This ensures all the key fields are present for the orchestrator, including:

- ``config["raw_query"]``
- ``config["classifier"]``
- ``config["execution_state"]["classifier_profile"]``
- ``config["execution_state"]["task_classification"]``
- ``config["execution_state"]["cognitive_classification"]``
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ``query`` | ``str`` | Raw user query string. | *required* |
| ``base_config`` | ``Optional[Dict[str, Any]]`` | Optional base configuration to start from. | ``None`` |
| ``classifier_profile`` | ``Optional[ClassifierProfile]`` | Rich classifier profile to attach. | ``None`` |
| ``task_classification`` | ``Optional[TaskClassification]`` | Optional task-level classification details. | ``None`` |
| ``cognitive_classification`` | ``Optional[CognitiveClassification]`` | Optional cognitive classification details. | ``None`` |
Returns:

| Type | Description |
|---|---|
| ``Dict[str, Any]`` | A fully populated orchestrator config dictionary. |
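The convenience helper can be sketched as below. Assumptions are flagged inline: the real helper may or may not deep-copy ``base_config``, and classification values are again typed as plain dicts so the sketch runs standalone.

```python
import copy
from typing import Any, Dict, Optional


def build_orchestrator_config(
    *,
    query: str,
    base_config: Optional[Dict[str, Any]] = None,
    classifier_profile: Optional[Dict[str, Any]] = None,
    task_classification: Optional[Dict[str, Any]] = None,
    cognitive_classification: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    """Sketch: build a fully populated orchestrator config."""
    # Deep-copy so the caller's base_config is never mutated
    # (an assumption; the real helper might mutate in-place).
    config: Dict[str, Any] = copy.deepcopy(base_config) if base_config else {}
    config["raw_query"] = query

    # Attach classifier outputs in the canonical locations
    # documented for attach_classifier_to_config_inplace.
    if classifier_profile is not None:
        config["classifier"] = classifier_profile
    execution_state = config.setdefault("execution_state", {})
    if classifier_profile is not None:
        execution_state["classifier_profile"] = classifier_profile
    if task_classification is not None:
        execution_state["task_classification"] = task_classification
    if cognitive_classification is not None:
        execution_state["cognitive_classification"] = cognitive_classification
    return config
```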