n8n is a tool that runs workflows in JavaScript or Python. So no.
This code was AI-generated, for y'all's reference. It is one node (method) in the workflow.
I did not write this code or even the comments; Claude did.
I did run the code and helped the AI debug it.
// Get the AI response from OpenAI node
const aiResponse = items[0].json;

// Get original data from Extract_Name node using n8n's $() function
const extractNameData = $('Extract_Name').first().json;

let enhancedData;

try {
  // OpenAI node returns the content in message.content as a string
  const content = aiResponse.message?.content || aiResponse.content || '';

  // Parse the JSON response from OpenAI
  if (typeof content === 'string') {
    enhancedData = JSON.parse(content);
  } else {
    enhancedData = content;
  }

  console.log('AI Enhancement successful:', enhancedData);
} catch (error) {
  console.error('Failed to parse AI response:', error);
  console.log('AI Response was:', aiResponse);

  // Fallback to original search terms if AI fails
  enhancedData = {
    enhanced_search_terms: extractNameData.search_terms,
    reasoning: 'AI parsing failed, using original terms',
    cultural_context: 'Unknown'
  };
}

// Build the output in the format expected by Search_OFAC_Database
return [{
  json: {
    // Required fields for Search_OFAC_Database
    original_name: extractNameData.original_name,
    search_terms: enhancedData.enhanced_search_terms || extractNameData.search_terms,
    search_count: (enhancedData.enhanced_search_terms || extractNameData.search_terms).length,

    // Keep original data for reference
    original_search_terms: extractNameData.search_terms,
    ignored_words: extractNameData.ignored_words,

    // AI enhancement details
    ai_enhancement: {
      reasoning: enhancedData.reasoning || 'No reasoning provided',
      cultural_context: enhancedData.cultural_context || 'Unknown',
      terms_added: (enhancedData.enhanced_search_terms || []).filter(term =>
        !extractNameData.search_terms.includes(term)
      )
    },

    // Debug info
    debug: extractNameData.debug
  }
}];
Here is what the nodes look like in the IDE:
I asked Claude to describe the code in detail, and this is what Claude said.
Me: Here is an N8N workflow. Can you please tell me exactly what it does and provide me a detailed flow of operations? Be detailed and inclusive so if someone ever needs to debug this, they have a nice descriptive file to do so. (I then gave it the entire source code base.)
Claude:
This is an OFAC (Office of Foreign Assets Control) sanctions screening API workflow that checks names against a sanctions database. Here’s a detailed breakdown of what it does and how it operates:
Overview
This N8N workflow creates a webhook API endpoint that accepts a name as input, intelligently processes it to handle various cultural naming conventions, searches an OFAC sanctions database, and returns matching results with confidence scores. It also logs all queries to a Google Sheet for auditing purposes.
Detailed Flow of Operations
1. OFAC_Check_Webhook (Entry Point)
- Type: Webhook trigger
- Path: /ofac-check
- Method: POST
- Purpose: Receives incoming requests with a name to check against the OFAC sanctions list (see the example call after this list)
- Expected Input: JSON body with a name field (e.g., {"name": "Vladimir Putin"})
- Also Captures: IP address, headers, and other request metadata for logging
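For anyone who wants to poke at it, here is a minimal sketch of a client call. The /ofac-check path, POST method, and JSON body come from the workflow; the host is a placeholder, and the /webhook/ prefix is n8n's default for production webhook URLs.

// Hypothetical client call -- replace the host with your n8n instance
const res = await fetch('https://YOUR-N8N-HOST/webhook/ofac-check', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Vladimir Putin' })
});
console.log(await res.json()); // response format shown under Send_Response below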
2. Bad Word Removal (Data Preparation)
- Type: Set node
- Purpose: Defines a comprehensive list of stop words to filter out during name processing
- Contains: 500+ common words including:
  - Articles (a, an, the)
  - Business terms (inc, llc, corporation)
  - Industry-specific terms (banking, trucking, medical)
  - Common adjectives and verbs
  - Geographic indicators
  - Religious and charitable organization terms
- Output: Passes the stop words array along with the original request data (rough shape sketched below)
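The Set node itself isn't pasted here, but based on how the next node consumes it, its output presumably looks something like this (the field names are my guess, not confirmed from the source):

// Illustrative shape only -- the real node carries 500+ stop words
{
  stop_words: ['a', 'an', 'the', 'inc', 'llc', 'corporation', /* ...500+ more */],
  body: { name: 'Vladimir Putin' },      // original webhook payload, passed through
  headers: { /* request headers, kept for logging */ }
}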
3. Extract_Name (Initial Processing)
- Type: JavaScript code node
- Purpose: Extracts and processes the name from the webhook request
- Operations (a rough sketch follows this list):
  - Handles various input formats (string, JSON, URL-encoded)
  - Decodes URL encoding
  - Splits the name into individual words
  - Filters out stop words and words with 2 or fewer characters
  - Removes duplicates
  - Prepares search terms for SQL queries
- Output:
  - original_name: The unprocessed input
  - search_terms: Filtered array of significant words
  - ignored_words: Words that were filtered out
  - Debug information
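This node's source isn't shown in this post, but a minimal sketch of that filtering in an n8n code node would look roughly like this (node names come from the workflow; the stop_words field name is assumed):

// Rough sketch, not the actual node code
const raw = decodeURIComponent($('OFAC_Check_Webhook').first().json.body?.name || '');
const stopWords = $('Bad Word Removal').first().json.stop_words || [];

const words = raw.toLowerCase().split(/\s+/).filter(Boolean);
const search_terms = [...new Set(
  words.filter(w => w.length > 2 && !stopWords.includes(w)) // drop stop words and short words
)];

return [{
  json: {
    original_name: raw,
    search_terms,
    ignored_words: words.filter(w => !search_terms.includes(w))
  }
}];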
4. AI_Search_Enhancer (Intelligent Enhancement)
- Type: OpenAI integration
- Model: GPT-4-Turbo
- Purpose: Uses AI to identify culturally significant name components that shouldn’t be filtered
- Key Features:
  - Recognizes Korean naming conventions (family name first)
  - Identifies Arabic particles (al-, bin, ibn)
  - Recognizes European particles (van, von, de)
  - Handles Spanish/Portuguese naming (dos, das, del)
  - Can identify when short words like “Un” in “Kim Jong Un” are significant
- Output: Enhanced search terms with cultural context (expected response shape sketched below)
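The prompt itself isn't reproduced here, but judging from the fields Format_AI_Response reads (enhanced_search_terms, reasoning, cultural_context), the model is expected to reply with JSON along these lines:

{
  "enhanced_search_terms": ["kim", "jong", "un"],
  "reasoning": "Korean given names are two syllables; 'un' is significant despite its length",
  "cultural_context": "Korean"
}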
5. Format_AI_Response (AI Output Processing)
- Type: JavaScript code node
- Purpose: Parses the AI response and merges it with original data
- Error Handling: Falls back to original search terms if AI parsing fails
- Output: Formatted data ready for database search, including both original and AI-enhanced terms
6. Search_OFAC_Database (SQL Query Builder)
- Type: JavaScript code node
- Purpose: Builds sophisticated SQL queries based on the number and type of search terms
- Query Strategies:
  - No terms: Exact full name match only
  - Single term: Simple LIKE queries with scoring
  - Multiple terms: Complex scoring system that prioritizes:
    - Exact matches (score: 100)
    - First + last name matches with middle names (score: 95)
    - First name starts + exact last name (score: 90)
    - All terms present (score: 85)
    - Partial matches (score: 30-70)
- Output: Complete SQL query with scoring logic (a simplified sketch follows this list)
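The real builder isn't reproduced here; a much-simplified sketch of the two-term case, using the scores above, might look like this (the sdn table and column names are assumptions, and production code must escape the terms):

// Hypothetical two-term query builder -- table/column names assumed
const { search_terms } = $('Format_AI_Response').first().json;
const [first, last] = search_terms;
const sql = `
  SELECT *,
    CASE
      WHEN lower(full_name) = '${first} ${last}' THEN 100
      WHEN lower(full_name) LIKE '${first} % ${last}' THEN 95
      WHEN lower(first_name) LIKE '${first}%' AND lower(last_name) = '${last}' THEN 90
      WHEN lower(full_name) LIKE '%${first}%' AND lower(full_name) LIKE '%${last}%' THEN 85
      ELSE 30
    END AS confidence
  FROM sdn
  WHERE lower(full_name) LIKE '%${first}%' OR lower(full_name) LIKE '%${last}%'
  ORDER BY confidence DESC
  LIMIT 100;`;
return [{ json: { sql_query: sql, search_terms } }];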
7. Execute_OFAC_Query (Database Query)
- Type: Execute Command node
- Purpose: Runs the SQL query against the OFAC database
- Database: SQLite database at /home/node/.n8n/ofac.db
- Command: Uses sqlite3 with JSON output format (example below)
- Returns: Raw JSON results from the database
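That likely translates to a command along these lines (sqlite3's -json flag emits rows as a JSON array; the exact flags the node uses are an assumption on my part):

// Hypothetical command string for the Execute Command node
const sql = $('Search_OFAC_Database').first().json.sql_query; // field name assumed
const command = `sqlite3 -json /home/node/.n8n/ofac.db "${sql.replace(/"/g, '\\"')}"`;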
8. Merge_Query_Data (Data Consolidation)
- Type: JavaScript code node
- Purpose: Combines query results with search metadata
- Merges (rough sketch below):
  - Database results (stdout, stderr, exit code)
  - Original search parameters
  - Search terms used
  - SQL query for debugging
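In an n8n code node that merge is probably just a spread of both sources; something like this sketch (field names assumed):

// Rough sketch of the merge, not the actual node code
const dbOut = items[0].json; // stdout, stderr, exit code from Execute_OFAC_Query
const searchMeta = $('Search_OFAC_Database').first().json; // terms + SQL query
return [{ json: { ...dbOut, ...searchMeta } }];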
9. Format_Results (Response Formatting)
- Type: JavaScript code node
- Purpose: Formats the final API response
- Features:
  - Parses database JSON results
  - Calculates confidence scores if not provided by SQL
  - Sorts results by confidence (highest first)
  - Includes detailed match information for each result
- Fallback Confidence Calculation (sketched after this list):
  - Exact match: 100
  - First + last name with middle names: 95
  - First name + exact last name: 90
  - All terms present: 75
  - Partial matches: Based on term location and match quality
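Here is my reconstruction of that fallback scoring, assuming each match row carries a full_name column (this is not the node's actual code):

// Hypothetical fallback confidence -- terms are the lowercased search terms
function fallbackConfidence(fullName, terms) {
  const name = fullName.toLowerCase();
  const parts = name.split(/\s+/);
  if (name === terms.join(' ')) return 100;                 // exact match
  if (parts[0] === terms[0] && parts.at(-1) === terms.at(-1)
      && parts.length > terms.length) return 95;            // first + last with middle names
  if (parts[0].startsWith(terms[0])
      && parts.at(-1) === terms.at(-1)) return 90;          // first name + exact last name
  if (terms.every(t => name.includes(t))) return 75;        // all terms present
  const hits = terms.filter(t => name.includes(t)).length;
  return Math.round(30 + 40 * hits / terms.length);         // partial-match band
}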
10. Send_Response (API Response)
- Type: Respond to Webhook node
- Purpose: Sends the formatted results back to the API caller
- Response Format:
{
  "searched_name": "Vladimir Putin",
  "search_terms": ["vladimir", "putin"],
  "match_found": true,
  "match_count": 1,
  "matches": [{
    "uid": "12345",
    "full_name": "Vladimir Vladimirovich Putin",
    "first_name": "Vladimir",
    "last_name": "Putin",
    "type": "individual",
    "program": "UKRAINE-EO13662",
    "confidence": 95
  }]
}
11. Log To Google Sheets (Audit Trail)
- Type: Google Sheets node
- Purpose: Logs every query for audit and analytics
- Logs (example row below):
  - Timestamp (ISO format)
  - IP address of requester
  - Name that was searched
- Sheet: Appends to “OFAClookup Queries” spreadsheet
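The appended row presumably looks something like this (column names are a guess based on the fields listed):

// Illustrative row shape for the “OFAClookup Queries” sheet
{
  timestamp: new Date().toISOString(),
  ip: '203.0.113.7',                // from the webhook request metadata
  searched_name: 'Vladimir Putin'
}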
Additional Components
Database Check Nodes (Inactive in main flow)
- Execute Command: Checks if the database exists and has entries
- Make sure the DB is there: Verifies database accessibility
Key Features
- Intelligent Name Processing: Handles various cultural naming conventions and doesn’t incorrectly filter significant short words
- Fuzzy Matching: Uses sophisticated SQL scoring to find matches even with spelling variations or middle names
- Cultural Awareness: AI enhancement ensures names from different cultures are searched appropriately
- Audit Trail: Every search is logged with timestamp and IP for compliance
- Error Handling: Graceful fallbacks at each stage if components fail
- Performance: Limits results to 100 matches and uses scoring to prioritize the most relevant ones
Debugging Information
The workflow provides extensive debugging data at each stage:
- Raw input parsing details
- Stop words that were filtered
- AI enhancement reasoning
- Complete SQL queries generated
- Database query results
- Confidence score calculations
This makes it easy to trace why a particular name did or didn’t match, and how the scoring was determined.
@npalardy I think this answers your "can you maintain it" question.