{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Let's load the necessary libraries" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:27:56.480166Z", "iopub.status.busy": "2026-03-13T14:27:56.479894Z", "iopub.status.idle": "2026-03-13T14:28:00.526393Z", "shell.execute_reply": "2026-03-13T14:28:00.525494Z" } }, "outputs": [], "source": [ "from openai import OpenAI # Used for accessing LLMs using the OpenAI API\n", "import pandas as pd # Used for data manipulation\n", "import matplotlib.pyplot as plt # Used for plotting\n", "from huggingface_hub import login # Used to log in to Hugging Face and access datasets\n", "from pydantic import BaseModel, ValidationError # Used for validating the output from the LLM\n", "import json # Used for parsing JSON output from the LLM" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use Ollama, which allows us to run large language models locally on our machine without needing to access a cloud-based API. This is useful for testing and development purposes, as it allows us to work with LLMs without incurring costs or needing an internet connection. \n", "\n", "First, we need to download a pre-trained model that we can run locally using Ollama. We can do this using the ollama pull command in the terminal. For example, we can download the Gemma 3 270m model, which is a smaller version of the Gemma 3 family that can run on a local machine without requiring a GPU." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:00.528899Z", "iopub.status.busy": "2026-03-13T14:28:00.528536Z", "iopub.status.idle": "2026-03-13T14:28:00.531320Z", "shell.execute_reply": "2026-03-13T14:28:00.530672Z" } }, "outputs": [], "source": [ "#!ollama pull gemma3:270m" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:00.533923Z", "iopub.status.busy": "2026-03-13T14:28:00.533706Z", "iopub.status.idle": "2026-03-13T14:28:00.536548Z", "shell.execute_reply": "2026-03-13T14:28:00.535752Z" } }, "outputs": [], "source": [ "model = \"gemma3:270m\" # The name of the model we downloaded and want to use with Ollama" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To use the Ollama API, we then need to start the Ollama server on our machine. We can do this using the following command in the terminal:\n", "\n", "```bash\n", "ollama serve\n", "```\n", "\n", "This will start the Ollama server and make the LLM API available at http://localhost:11434/v1. For convenience, we can also start the Ollama server from within our Python code using the subprocess module, which allows us to run shell commands from Python." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:00.538658Z", "iopub.status.busy": "2026-03-13T14:28:00.538461Z", "iopub.status.idle": "2026-03-13T14:28:00.550253Z", "shell.execute_reply": "2026-03-13T14:28:00.549076Z" } }, "outputs": [], "source": [ "import subprocess\n", "process = subprocess.Popen([\"ollama\", \"serve\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have the Ollama server running, we can create a client to access the LLM API. 
The OpenAI library provides a convenient interface for accessing the LLM API, and we can use it to create a client that connects to our local Ollama server." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:00.552850Z", "iopub.status.busy": "2026-03-13T14:28:00.552592Z", "iopub.status.idle": "2026-03-13T14:28:00.646257Z", "shell.execute_reply": "2026-03-13T14:28:00.645113Z" } }, "outputs": [], "source": [ "client = OpenAI(\n", " base_url = \"http://localhost:11434/v1\", # Ollama endpoint for accessing the local LLM API\n", " api_key = \"\" # No API key is needed for the local Ollama API, but we need to provide an empty string to avoid authentication errors\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, we can use the client to send a request to the LLM API and get a response. For example, we can send a simple chat completion request to the API and print the response." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:00.648594Z", "iopub.status.busy": "2026-03-13T14:28:00.648389Z", "iopub.status.idle": "2026-03-13T14:28:02.781204Z", "shell.execute_reply": "2026-03-13T14:28:02.779895Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "assistant: Inflation is the increase in the general price level of goods and services in an economy over a period of time. 
It is the general rate at which prices in an economy are rising.\n", "\n" ] } ], "source": [ "response = client.chat.completions.create(\n", " model = model,\n", " messages = [\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", " {\"role\": \"user\", \"content\": \"What is inflation?\"},\n", " ]\n", ")\n", "\n", "print(f\"{response.choices[0].message.role}: {response.choices[0].message.content}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, we are sending a chat completion request to the LLM API using the \"gemma3:270m\" model, which is the smallest version of Gemma 3, an open model family from Google, and can run on a local machine without requiring a GPU. The request includes a system message that sets the context for the conversation and a user message that asks the question \"What is inflation?\". The response from the API includes a message from the assistant role that provides an answer to the user's question.\n", "\n", "Using APIs can be very costly if we are making a large number of requests. We can check how many tokens we have used in our request and response to get an idea of the cost of using the API. The OpenAI library provides a convenient way to access the token usage information from the response."
] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:02.818153Z", "iopub.status.busy": "2026-03-13T14:28:02.817927Z", "iopub.status.idle": "2026-03-13T14:28:02.821491Z", "shell.execute_reply": "2026-03-13T14:28:02.820709Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Tokens used: 59\n", "Input tokens: 21\n", "Output tokens: 38\n" ] } ], "source": [ "print(f\"Tokens used: {response.usage.total_tokens}\")\n", "print(f\"Input tokens: {response.usage.prompt_tokens}\")\n", "print(f\"Output tokens: {response.usage.completion_tokens}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ":::{.callout-note}\n", "#### API Pricing\n", "Since we are using a local model with Ollama, we do not incur any costs for using the API. However, when using cloud-based APIs, costs are typically charged per token (both input and output). For example, at the time of writing (early 2026), OpenAI charges \\$1.75 per million input tokens and \\$14.00 per million output tokens for GPT 5.2. Pricing changes frequently, so always check the provider's current pricing page before running large batch jobs.\n", ":::\n", "\n", "\n", "### Exploring the Probabilistic Nature of LLMs\n", "\n", "LLMs are probabilistic models: they generate output by sampling from a probability distribution over possible next tokens. As a result, the same input can lead to different outputs each time we send a request to the API. This is important to keep in mind, especially when using LLMs for tasks that require consistency or when evaluating their performance, as the variability in the output can affect the results.\n", "\n", "Consider the following function that sends a request to the LLM API to answer a question about the most beautiful city in Europe. 
If we call this function multiple times, we may get different answers each time due to the probabilistic nature of the model." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:02.823750Z", "iopub.status.busy": "2026-03-13T14:28:02.823556Z", "iopub.status.idle": "2026-03-13T14:28:02.826680Z", "shell.execute_reply": "2026-03-13T14:28:02.826025Z" } }, "outputs": [], "source": [ "def city_question():\n", "\n", " response = client.chat.completions.create(\n", " model = model,\n", " messages = [\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", " {\"role\": \"user\", \"content\": \"What is the most beautiful city in Europe? Answer with one word.\"},\n", " ]\n", " )\n", "\n", " return response.choices[0].message.content" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can call this function multiple times to see the variability in the output." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:02.828940Z", "iopub.status.busy": "2026-03-13T14:28:02.828741Z", "iopub.status.idle": "2026-03-13T14:28:04.907374Z", "shell.execute_reply": "2026-03-13T14:28:04.906189Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Run 1: Paris\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 2: Istanbul\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 3: Paris\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 4: Paris\n", "\n", "Run 5: Paris\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 6: Rome\n", "\n", "Run 7: Paris\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 8: Rome\n", "\n", "Run 9: Paris\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 10: Paris\n", "\n" ] } ], "source": [ "for ii in range(10):\n", " print(f\"Run {ii + 1}: 
{city_question()}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that it often answers with \"Paris\", but sometimes with \"Istanbul\", \"Rome\", or other cities. This illustrates an important property of LLMs: because they sample from a probability distribution over tokens, there is no guarantee that the same input will produce the same output. This variability is particularly relevant for tasks that require reliability and consistency, such as automated data extraction or classification pipelines. Furthermore, we also have no guarantee that the output will be factually correct, as the model may generate plausible-sounding but incorrect answers (known as \"hallucinations\").\n", "\n", "\n", "#### Temperature and Sampling {.unnumbered}\n", "\n", "When we send a request to the LLM API, we can also specify some parameters that control the behavior of the model. For example, we can specify the temperature parameter, which adjusts the probability distribution of the model's output. A higher temperature flattens the distribution, resulting in more random output, while a lower temperature sharpens it, resulting in more deterministic output."
] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:04.910419Z", "iopub.status.busy": "2026-03-13T14:28:04.910140Z", "iopub.status.idle": "2026-03-13T14:28:04.914773Z", "shell.execute_reply": "2026-03-13T14:28:04.913660Z" } }, "outputs": [], "source": [ "def ask_question(question, temperature=0.7, top_p=0.9):\n", "\n", " response = client.chat.completions.create(\n", " model = model,\n", " messages = [\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", " {\"role\": \"user\", \"content\": question},\n", " ],\n", " temperature=temperature, # Higher temperature for more random output\n", " top_p=top_p, # Nucleus sampling parameter to control the diversity of the output\n", " )\n", "\n", " return response.choices[0].message.content" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can ask the same question with different temperature settings to see how it affects the output of the model." 
] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:04.917833Z", "iopub.status.busy": "2026-03-13T14:28:04.917551Z", "iopub.status.idle": "2026-03-13T14:28:07.251121Z", "shell.execute_reply": "2026-03-13T14:28:07.249770Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Temperature = 0.0 (Least random output):\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 1: Simone Biles\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 2: Simone Biles\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 3: Simone Biles\n", "\n", "\n", "Temperature = 0.7 (More random output):\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 1: Simone Biles\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 2: Simone Biles\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 3: Simone Biles\n", "\n", "\n", "Temperature = 2.0 (Even more random output):\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 1: Simone Biles\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 2: Michael Phelps\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 3: Michael Jordan\n", "\n" ] } ], "source": [ "question = \"Who is the best Olympic athlete of all time? 
Only provide the name without any explanations or additional text.\"\n", "\n", "print(\"Temperature = 0.0 (Least random output):\")\n", "\n", "for ii in range(3):\n", " print(f\"Run {ii + 1}: {ask_question(question, temperature=0.0)}\")\n", "\n", "print(\"\\nTemperature = 0.7 (More random output):\")\n", "\n", "for ii in range(3):\n", " print(f\"Run {ii + 1}: {ask_question(question, temperature=0.7)}\")\n", "\n", "print(\"\\nTemperature = 2.0 (Even more random output):\")\n", "\n", "for ii in range(3):\n", " print(f\"Run {ii + 1}: {ask_question(question, temperature=2.0)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similarly, we can also specify the top_p parameter, which controls the nucleus sampling of the model's output. For example, if we set top_p to 0.9, the model samples only from the smallest set of tokens whose cumulative probability is at least 90%, truncating the unlikely tail of the distribution. Thus, a lower top_p will result in more focused output, while a higher top_p will result in more diverse output."
] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:07.255418Z", "iopub.status.busy": "2026-03-13T14:28:07.255019Z", "iopub.status.idle": "2026-03-13T14:28:08.627842Z", "shell.execute_reply": "2026-03-13T14:28:08.626354Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Top-p = 0.1 (More focused output):\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 1: Simone Biles\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 2: Simone Biles\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 3: Simone Biles\n", "\n", "\n", "Top-p = 0.9 (More diverse output):\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 1: Simone Biles\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 2: Simone Biles\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Run 3: Simone Biles\n", "\n" ] } ], "source": [ "print(\"Top-p = 0.1 (More focused output):\")\n", "for ii in range(3):\n", " print(f\"Run {ii + 1}: {ask_question(question, top_p=0.1)}\")\n", "\n", "print(\"\\nTop-p = 0.9 (More diverse output):\")\n", "for ii in range(3):\n", " print(f\"Run {ii + 1}: {ask_question(question, top_p=0.9)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that if neither temperature nor top_p is specified, the model will use the default values. These default values may differ between models and providers.\n", "\n", "\n", "### Zero-Shot and Few-Shot Classification\n", "\n", "For this section, we will again use a pre-labeled dataset for sentence-level sentiment analysis of ECB speeches [@Pfeifer2023], which is available on Hugging Face ([Central Bank Communication Dataset](https://huggingface.co/datasets/Moritz-Pfeifer/CentralBankCommunication)). 
The dataset contains sentences from ECB speeches that have been labeled as having positive (1) or negative (0) sentiment.\n", "\n", "Let's load the dataset into a pandas DataFrame." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:08.631027Z", "iopub.status.busy": "2026-03-13T14:28:08.630746Z", "iopub.status.idle": "2026-03-13T14:28:09.195289Z", "shell.execute_reply": "2026-03-13T14:28:09.194269Z" } }, "outputs": [], "source": [ "df = pd.read_csv(\"hf://datasets/Moritz-Pfeifer/CentralBankCommunication/Sentiment/ECB_prelabelled_sent.csv\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, we can define a function that takes a sentence as input and uses the LLM API to classify the sentiment of the sentence as positive or negative. We will use a zero-shot classification approach, where we provide the model with a prompt that describes the task and the possible labels, but we do not provide any examples of labeled sentences."
] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:09.198098Z", "iopub.status.busy": "2026-03-13T14:28:09.197845Z", "iopub.status.idle": "2026-03-13T14:28:09.202354Z", "shell.execute_reply": "2026-03-13T14:28:09.201134Z" } }, "outputs": [], "source": [ "def classify_sentiment(sentence):\n", "\n", " prompt = f\"\"\"Read the following sentence from a central bank speech and decide whether it expresses an optimistic or pessimistic view of the economy.\n", "\n", " Sentence: \"{sentence}\"\n", "\n", " Answer with exactly one word: 'positive' if optimistic, 'negative' if pessimistic.\"\"\"\n", "\n", " response = client.chat.completions.create(\n", " model = model,\n", " messages = [\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant that classifies the sentiment of sentences.\"},\n", " {\"role\": \"user\", \"content\": prompt},\n", " ]\n", " )\n", "\n", " return response.choices[0].message.content.strip().lower()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can apply this function to the sentences in our dataset to get the predicted sentiment labels from the LLM API." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:09.205181Z", "iopub.status.busy": "2026-03-13T14:28:09.204935Z", "iopub.status.idle": "2026-03-13T14:28:13.800364Z", "shell.execute_reply": "2026-03-13T14:28:13.799067Z" } }, "outputs": [ { "data": { "text/html": [ "
<div>\n", "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\">\n", "      <th></th>\n", "      <th>sentence</th>\n", "      <th>true_sentiment</th>\n", "      <th>predicted_sentiment</th>\n", "    </tr>\n", "  </thead>\n", "  <tbody>\n", 
"    <tr><th>949</th><td>47 in any case economic interdependence will i...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>950</th><td>over the last 15 years the financial sector ha...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>951</th><td>first the integration of our economies and wit...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>952</th><td>we expect that further deregulation as well as...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>953</th><td>output growth has been gathering pace througho...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>954</th><td>in this regard the crisis has uncovered four s...</td><td>0</td><td>negative</td></tr>\n", 
"    <tr><th>955</th><td>the second challenge concerns another aspect o...</td><td>0</td><td>negative</td></tr>\n", 
"    <tr><th>956</th><td>6 such differences in institutional quality ar...</td><td>0</td><td>negative</td></tr>\n", 
"    <tr><th>957</th><td>third a number of other factors show up in an ...</td><td>0</td><td>negative</td></tr>\n", 
"    <tr><th>958</th><td>in 2007 the us current account deficit amounte...</td><td>0</td><td>negative</td></tr>\n", 
"  </tbody>\n", "</table>\n", "</div>
" ], "text/plain": [ " sentence true_sentiment \\\n", "949 47 in any case economic interdependence will i... 1 \n", "950 over the last 15 years the financial sector ha... 1 \n", "951 first the integration of our economies and wit... 1 \n", "952 we expect that further deregulation as well as... 1 \n", "953 output growth has been gathering pace througho... 1 \n", "954 in this regard the crisis has uncovered four s... 0 \n", "955 the second challenge concerns another aspect o... 0 \n", "956 6 such differences in institutional quality ar... 0 \n", "957 third a number of other factors show up in an ... 0 \n", "958 in 2007 the us current account deficit amounte... 0 \n", "\n", " predicted_sentiment \n", "949 negative \n", "950 negative \n", "951 negative \n", "952 negative \n", "953 negative \n", "954 negative \n", "955 negative \n", "956 negative \n", "957 negative \n", "958 negative " ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.DataFrame({\n", " \"sentence\": df[\"text\"].iloc[949:959],\n", " \"true_sentiment\": df[\"sentiment\"].iloc[949:959],\n", " \"predicted_sentiment\": df[\"text\"].iloc[949:959].apply(classify_sentiment)\n", "})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model seems to always predict negative, even for the sentences labeled as positive. This is likely due, at least in part, to the model being very small (only 270 million parameters) and having limited reasoning capabilities.\n", "\n", "Let's provide the model with a few examples of sentences labeled as positive or negative to see if it can learn from these examples and improve its predictions. This is known as few-shot classification."
] }, { "cell_type": "code", "execution_count": 16, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:13.803399Z", "iopub.status.busy": "2026-03-13T14:28:13.803114Z", "iopub.status.idle": "2026-03-13T14:28:13.808234Z", "shell.execute_reply": "2026-03-13T14:28:13.807275Z" } }, "outputs": [], "source": [ "def classify_sentiment_few_shot(sentence):\n", "\n", " prompt = f\"\"\"Read the following sentence from a central bank speech and decide whether it expresses an optimistic or pessimistic view of the economy.\n", "\n", " Sentence: \"{sentence}\"\n", "\n", " Here are some examples of sentences labeled as positive or negative:\n", "\n", " - positive: \"over the last 15 years the financial sector has grown significantly faster than other parts of the economy.\"\n", " - negative: \"in all scenarios a deep recession is envisaged in the severe scenario real gdp would fall by 12 percent in 2020\"\n", " - positive: \"first the integration of our economies and with it the convergence of our member states has also greatly increased\"\n", " - negative: \"on the other hand if the fiscal starting position is not particularly solid when an economic downturn sets in there may come a point where budget deficits become excessive\"\n", "\n", " Answer with exactly one word: 'positive' if optimistic, 'negative' if pessimistic.\"\"\"\n", "\n", " response = client.chat.completions.create(\n", " model = model,\n", " messages = [\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant that classifies the sentiment of sentences.\"},\n", " {\"role\": \"user\", \"content\": prompt},\n", " ]\n", " )\n", "\n", " return response.choices[0].message.content.strip().lower()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can apply this few-shot classification function to the sentences in our dataset to see if it improves the predictions." 
] }, { "cell_type": "code", "execution_count": 17, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:13.811184Z", "iopub.status.busy": "2026-03-13T14:28:13.810890Z", "iopub.status.idle": "2026-03-13T14:28:26.213654Z", "shell.execute_reply": "2026-03-13T14:28:26.212524Z" } }, "outputs": [ { "data": { "text/html": [ "
<div>\n", "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\">\n", "      <th></th>\n", "      <th>sentence</th>\n", "      <th>true_sentiment</th>\n", "      <th>predicted_sentiment</th>\n", "    </tr>\n", "  </thead>\n", "  <tbody>\n", 
"    <tr><th>949</th><td>47 in any case economic interdependence will i...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>950</th><td>over the last 15 years the financial sector ha...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>951</th><td>first the integration of our economies and wit...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>952</th><td>we expect that further deregulation as well as...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>953</th><td>output growth has been gathering pace througho...</td><td>1</td><td>negative</td></tr>\n", 
"    <tr><th>954</th><td>in this regard the crisis has uncovered four s...</td><td>0</td><td>positive</td></tr>\n", 
"    <tr><th>955</th><td>the second challenge concerns another aspect o...</td><td>0</td><td>negative</td></tr>\n", 
"    <tr><th>956</th><td>6 such differences in institutional quality ar...</td><td>0</td><td>negative</td></tr>\n", 
"    <tr><th>957</th><td>third a number of other factors show up in an ...</td><td>0</td><td>negative</td></tr>\n", 
"    <tr><th>958</th><td>in 2007 the us current account deficit amounte...</td><td>0</td><td>negative</td></tr>\n", 
"  </tbody>\n", "</table>\n", "</div>
" ], "text/plain": [ " sentence true_sentiment \\\n", "949 47 in any case economic interdependence will i... 1 \n", "950 over the last 15 years the financial sector ha... 1 \n", "951 first the integration of our economies and wit... 1 \n", "952 we expect that further deregulation as well as... 1 \n", "953 output growth has been gathering pace througho... 1 \n", "954 in this regard the crisis has uncovered four s... 0 \n", "955 the second challenge concerns another aspect o... 0 \n", "956 6 such differences in institutional quality ar... 0 \n", "957 third a number of other factors show up in an ... 0 \n", "958 in 2007 the us current account deficit amounte... 0 \n", "\n", " predicted_sentiment \n", "949 negative \n", "950 negative \n", "951 negative \n", "952 negative \n", "953 negative \n", "954 positive \n", "955 negative \n", "956 negative \n", "957 negative \n", "958 negative " ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.DataFrame({\n", " \"sentence\": df[\"text\"].iloc[949:959],\n", " \"true_sentiment\": df[\"sentiment\"].iloc[949:959],\n", " \"predicted_sentiment\": df[\"text\"].iloc[949:959].apply(classify_sentiment_few_shot)\n", "})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This does not seem to work much better. This is likely because the model is still very small and has limited capabilities, and also because the examples we provided may not be representative enough of the sentences in our dataset. In practice, few-shot classification can work well with larger and more powerful models, and with carefully chosen examples that are representative of the task at hand.\n", "\n", "\n", "### Structured Output Generation\n", "\n", "LLMs can also be used to generate structured output, such as JSON or XML, which can be useful for tasks that require a specific format for the output. For example, we can ask the model to extract specific information from a text and return it in a structured format."
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:26.216628Z", "iopub.status.busy": "2026-03-13T14:28:26.216341Z", "iopub.status.idle": "2026-03-13T14:28:26.221163Z", "shell.execute_reply": "2026-03-13T14:28:26.219996Z" } }, "outputs": [], "source": [ "def extract_information(sentence):\n", "\n", " prompt = f\"\"\"Extract the growth rate from the following sentence: \"{sentence}\"\n", "\n", " Example: If the growth rate is X%, return it in the following JSON format:\n", " {{\n", " \"growth_rate\": X\n", " }}\n", " \"\"\"\n", "\n", " response = client.chat.completions.create(\n", " model = model,\n", " messages=[\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant that extracts structured information from sentences.\"},\n", " {\"role\": \"user\", \"content\": prompt},\n", " ]\n", " )\n", "\n", " return response.choices[0].message.content.strip()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can apply this function to a sentence in our dataset to see how well it extracts the information and returns it in the specified JSON format." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:26.224109Z", "iopub.status.busy": "2026-03-13T14:28:26.223898Z", "iopub.status.idle": "2026-03-13T14:28:27.078210Z", "shell.execute_reply": "2026-03-13T14:28:27.076870Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "```json\n", "{\n", " \"growth_rate\": 1.4\n", "}\n", "```\n" ] } ], "source": [ "llm_output = extract_information(\"The economy is expected to grow by 1.4% in the next quarter.\")\n", "print(llm_output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unfortunately, the LLM also outputs json code fences, e.g., ```json ... ```, which prevents us from parsing the output directly as JSON. 
Thus, we need to remove the code fences from the output before we can parse it as JSON." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:27.081171Z", "iopub.status.busy": "2026-03-13T14:28:27.080885Z", "iopub.status.idle": "2026-03-13T14:28:27.085453Z", "shell.execute_reply": "2026-03-13T14:28:27.084121Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " \"growth_rate\": 1.4\n", "}\n" ] } ], "source": [ "llm_output = llm_output.replace(\"```json\", \"\").replace(\"```\", \"\").strip()\n", "print(llm_output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can parse the output as JSON and validate it using Pydantic to ensure that it has the correct structure and data types. First, we need to define a Pydantic model that specifies the expected structure of the output." ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:27.088418Z", "iopub.status.busy": "2026-03-13T14:28:27.088142Z", "iopub.status.idle": "2026-03-13T14:28:27.092597Z", "shell.execute_reply": "2026-03-13T14:28:27.091539Z" } }, "outputs": [], "source": [ "class GrowthRate(BaseModel):\n", " growth_rate: float" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, we can parse the output from the LLM and validate it against the Pydantic model." 
] }, { "cell_type": "code", "execution_count": 22, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:27.094849Z", "iopub.status.busy": "2026-03-13T14:28:27.094646Z", "iopub.status.idle": "2026-03-13T14:28:27.099434Z", "shell.execute_reply": "2026-03-13T14:28:27.098570Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "growth_rate=1.4\n", "Extracted growth rate: 1.4%\n" ] } ], "source": [ "try:\n", " parsed = json.loads(llm_output)\n", " validated = GrowthRate(**parsed)\n", " print(validated)\n", " print(f\"Extracted growth rate: {validated.growth_rate}%\")\n", "except (json.JSONDecodeError, ValidationError) as e:\n", " print(\"Validation failed:\", e)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Expanding Capabilities of LLMs with Tools\n", "\n", "LLMs can also be used in combination with external tools to perform tasks that require capabilities beyond text generation. Suppose we want our LLM to generate random numbers as part of its output. A naive approach would be to ask the LLM to generate a random number directly in its response. " ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:27.101638Z", "iopub.status.busy": "2026-03-13T14:28:27.101437Z", "iopub.status.idle": "2026-03-13T14:28:27.104752Z", "shell.execute_reply": "2026-03-13T14:28:27.103835Z" } }, "outputs": [], "source": [ "def generate_random_number():\n", "\n", " response = client.chat.completions.create(\n", " model = model,\n", " messages = [\n", " {\"role\": \"system\", \"content\": \"You are a professional random number generator. You only reply with a uniformly distributed random number between 1 and 100. 
Do not provide any explanations or additional text.\"},\n", " {\"role\": \"user\", \"content\": \"Generate a random number now.\"},\n", " ]\n", " )\n", "\n", " return response.choices[0].message.content" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can then call this function to get a random number from the LLM." ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:28:27.106954Z", "iopub.status.busy": "2026-03-13T14:28:27.106747Z", "iopub.status.idle": "2026-03-13T14:31:52.642896Z", "shell.execute_reply": "2026-03-13T14:31:52.641490Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "25 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "50 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n", "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "72\n", "```\n", "\n", "75 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "1\n", "```\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "100 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { 
"name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n", "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "125 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "150 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: 1. 1.\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "175 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "1\n", "```\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "200 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "225 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n", "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n", "250 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "275 of 1000...\n" ] }, { "name": 
"stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "1\n", "```\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "300 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```python\n", "import random\n", "\n", "random_number = random.uniform(1, 100)\n", "print(random_number)\n", "```\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: Random number 1\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "0\n", "```\n", "\n", "325 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n", "350 of 1000...\n", "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "0\n", "```\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "375 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid 
number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "400 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```python\n", "import random\n", "print(random.randint(1, 100))\n", "```\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "425 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: 1. 0\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "450 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "475 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "500 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": 
"stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: 12.5\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n", "525 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "0\n", "```\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "550 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "575 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```python\n", "import random\n", "\n", "random_number = random.uniform(1, 100)\n", "print(random_number)\n", "```\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "600 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "625 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n", "650 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "675 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "0\n", "```\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "700 of 1000...\n" ] }, { "name": 
"stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```python\n", "import random\n", "```\n", "725 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ...1\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "750 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "775 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "0\n", "```\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "800 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 123\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ```\n", "0\n", "```\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: Random number 1\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "825 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number 
generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "850 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "875 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "900 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Non-numeric response generated: ...1\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "925 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "950 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "975 of 1000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Invalid number generated: 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "1000 of 1000...\n" ] } ], "source": [ "random_numbers 
= []\n", "N = 1000\n", "\n", "for ii in range(N):\n", "\n", "    # Generate a random number\n", "    random_number = generate_random_number()\n", "\n", "    # Check if the response is a valid number between 1 and 100\n", "    try:\n", "        random_number = int(random_number)\n", "        if 1 <= random_number <= 100:\n", "            random_numbers.append(random_number)\n", "        else:\n", "            print(f\"Invalid number generated: {random_number}\")\n", "    except ValueError:\n", "        print(f\"Non-numeric response generated: {random_number}\")\n", "\n", "    if (ii + 1) % 25 == 0:\n", "        print(f\"{ii + 1} of {N}...\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can plot a histogram of the generated random numbers to see if they are uniformly distributed between 1 and 100." ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "execution": { "iopub.execute_input": "2026-03-13T14:31:52.646384Z", "iopub.status.busy": "2026-03-13T14:31:52.646076Z", "iopub.status.idle": "2026-03-13T14:31:52.898593Z", "shell.execute_reply": "2026-03-13T14:31:52.897591Z" } }, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAjsAAAHFCAYAAAAUpjivAAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjgsIGh0dHBzOi8vbWF0cGxvdGxpYi5vcmcvwVt1zgAAAAlwSFlzAAAPYQAAD2EBqD+naQAARJZJREFUeJzt3Q1cVHXe//8PAiIQkkKCrrelad5U3pRlbmgq5m1lpWWmpl2rURapmWa7WltitiK1bloul5qkWLvaVe16g2WU61reZKm5lkWKBpFmgHeIeP6Pz3d/M/+Z4UbEwYHD6/l4HJ05852Zc86cmfPme3OOn2VZlgAAANhULV8vAAAAQGUi7AAAAFsj7AAAAFsj7AAAAFsj7AAAAFsj7AAAAFsj7AAAAFsj7AAAAFsj7AAAAFsj7OC8lixZIn5+frJt27YSHx84cKA0b97cbZ7eHz169AVt3c2bN8vMmTPl119/5VMpp5UrV0q7du0kODjYfEY7d+4ss3xGRoY8/vjjcs0110hoaKjUqVPHfFYjRoyQjRs3ip1OqP7111+b/emHH37w+mvr6+r2Ph/9Dmg5x1S7dm256qqrZPLkyZKXlye+0qNHDzNVFY7t2aBBA8nPzy/2uO6j+jtTFX//UD0QdlApVq9eLb///e8vOOw899xzhJ1y+vnnn+XBBx80B8+1a9fKv//9b7n66qtLLf/ee+9Jhw4dzP+jRo0yn9G6devM53T06FG57bbb5KOPPhI7hR3dnyoj7FwIDaL62eik275nz54yd+5cueeee3y6XFV1n54zZ46vFwM2FODrBYA9dezYUaqbwsJC8xdcQED1+Fp88803Zpm1ViYmJqbMst99953cf//9phZow4YNUrduXedj+tyxY8fKxx9/LPXq1ZOq6uTJkxISEiLVTa1ateSmm25y3r/99tvl+++/l7S0NFPT1qJFC58uX1Wi22bevHny6KOPSnR0tNQk1XX/ri6o2UGl8GzGOnfunLzwwgvSunVr85fu5ZdfLtdee6288sorzmrsp556ytzWH39Htb8egB3P17/42rRpI0FBQaa6e+TIkXLo0CG399VmmFmzZkmzZs1ME02XLl3MQcWz2l5fV19/2bJlMmnSJPnNb35jXnf//v3mr8u4uDhp27atXHbZZea9tNbj008/dXsvrTHQ13j55ZflpZdeMuus66bv4wgiU6dOlUaNGkl4eLjcddddkpOTU67tpzUAN998s/nxCwsLkz59+piaAQfdtt27dze3hw0bZpajrGaJxMRE82P62muvuQUdV/r86667zm3et99+K8OHDzfbQLePNn/95S9/cSvj2JYrVqyQ6dOnm/XV9+jdu7fs27ev2Pto2OrVq5cpo+t3yy23yIcfflhis8aOHTtMDYiGMK3BUtqccN999zm3t/6vQe7AgQNuTQ/33nuvua01KY79SedfyHKof/zjH3L99deb9dd9809/+pNcLN0v1U8//eScp/veQw89JK1atTLLo/vkoEGDZNeuXRXe3vp90O+N4/vQqVMnWbNmTYnLdPDgQROcXT9rrYHS796l2OeV/kacPXvWfP5lcWwDx++D5/K5fs76XdHv8X/+8x/p27evab5t2LChzJ492zy+ZcsW813S+VozunTp0hLf89ixY+bzqV+/vimrn42GVm/v3/qaun/rNtTPISoqyrze+ZqocR561XOgLIsXL9aOHNaWLVuswsLCYlP//v2tZs2auT1H748aNcp5PyEhwfL397dmzJhhffjhh9batWutpKQka+bMmebxzMxMa8KECeZ9Vq1aZf373/82U25urnn8d7/7nXnsscceM89duHChdcUVV1hNmjSxfv75Z+f7TJs2zZTT8lpu0aJFVtOmTa2GDRtaMTExznIbN2405X7zm99Y99xzj/Xee+9ZH3zwgXX06FHrP//5j/XII49Yqamp1scff2zmjx071qpVq5Z5nkNGRoZ5DV3XQYMGmXIpKSlWVFS
UdfXVV1sPPvigNWbMGGvNmjVmeS+77DJT7nzeeust87qxsbHWu+++a61cudLq3LmzVbt2bevTTz81Zfbv32/95S9/MeVmzZplttWePXtKfc1WrVqZbXAh9PXCw8OtDh06WG+++aa1fv16a9KkSWY7OD43123ZvHlz64EHHrD+8Y9/WCtWrDDbXd/37NmzzrLLli2z/Pz8rDvvvNN8zu+//741cOBAs29s2LDBWU73E8e2ffrpp620tDSzLdQ777xj/eEPf7BWr15tpaenm89JP1vdHxz7Qk5Ojtku+hq6nRz7k86/kOXQ2zqve/fuppy+9w033GDWrTw/n/odCA0NLTZf97mAgADrp59+cs7TddHt+7e//c3c1vXT5QsODjb7ZEW2t2M76v6r++Ebb7xh9vno6Gi374NuF52v21D3Vf3u6HdNn6vfhcre5x3LqZ/fk08+abbNvn37nI/r+w0YMKDYNnD9Proun/5muX4G+t255pprrFdeecXsSw899JApp78XutzJycnWunXrzD6g87dt21bs909/axzrptuxQYMGZt6xY8e8un+3bt3aatmypXkt3Q/+/ve/m/3Cc11xYQg7OC/Hl72s6XxhR7/w119/fZnv8/LLL5vX0h8sV3v37jXz4+Li3OZ/9tlnZv4zzzxj7v/yyy9WUFCQNWzYMLdyepDTciWFnVtvvfW8668HDw11vXr1su66665iP6zXXXedVVRU5JyvIU7nDx482O114uPjzXxHgCuJvk6jRo1MwHB9zfz8fPPj2q1bt2LroAfg86lTp4510003lfh+rsHV9T379u1rNW7cuNjy6kFQX0+3t+tyaOh19fbbb5v5uv3ViRMnrPr16xc7+Ol76ja88cYbix0MNNSU5/M5fvy4CRV6MHPQ7VLSAfFClqNr167m8zh16pRzXl5ennn+hYQdx/Y9cuSItWDBAhMYHfttWet15swZE2A0ADiUd3vrQVg/J9d9Vv3rX/8q9n2YOnWqmaffKVcadPTg7QgelbHPe4Yd3UYasu+++26vhR2dp6HBQT8LDXY6f8eOHc75+seOBpOJEycW+/0rbTu+8MILXtu/dd11vm5PeBfNWCi3N998U7Zu3VpscjSnlOXGG2+UL7/80jQPaafYCxmJoqOElOfoLn1NrWp3VBFrdXRBQYEMHTrUrZz2l/AcLeZw9913lzh/4cKFpspfq/61D09gYKB5n7179xYr279/f9Mvw0GXSQ0YMMCtnGO+NheURpshfvzxR9Px2PU1tRpel1XXUZujvGXIkCFm3RyTjtRSp0+fNuurzRBaFa9NC45J11cf12VxNXjwYLf72kypHM1L2gH9l19+MZ2jXV9Pm0m0r4buSydOnDjv53P8+HF5+umnpWXLluaz0Um3jz63pM/HU3mXQye9rdtI9wMHbVbUJozy0tdxbN/IyEh55JFHTNPjiy++6FZOl0GbYLX5VEdt6Xrp/9qUWNJ6nW97a7Onfk4PPPCAW7lu3bqZZi1X2jFd31e/U670O6d/FHt2XPfmPu8pIiLCfL5///vf5bPPPhNv0CYjXWYH3ba6/2hzlmv/Qm2i0mY81yZRh9K2o+P3yRv7t76/NmdpM6E2PX/xxRduzYiouOrRExNVgv5wOfoauNK2+czMzDKfO23aNNPOnZKSYoKEv7+/3Hrrrabdv6TXdKUjhZT+MHnSdm3HD5OjnLZxeyppXmmvqT8y2o9n/Pjx8sc//tEcoHR5ddRSSQcd/YFypQeosubrAaii66o/fNp34EI7MjZt2rTEH3Dtk/Hss8+a2zfccIPbcugP9Z///GczleTIkSPFDlKutL+BOnXqlFv/lLJGIenBQvcTh5K2g/Yh0iCmn4cus/aNcBzMHO9VlvIuh76mbu+SOspeSOdZ7dPyySefmNvZ2dlmm2t/Gw0n2r/FYeLEiaY/lB7otdO49uPQQPHwww+XuF7n296Ofak8y69lS/qDQPc519eqjH2+JPHx8TJ
//nyZMmWKpKeny8XS74trYHUsm+fyOuaXtLylbUfHtvHG/q37nO7bzz//vOlrpb9DuowatDQca9BGxRB2cEnoX1L6Y66TnkdHO/E988wzpsOgBqWyDt6OH/WsrCxp3Lix22NaC6JhxLWca6dPBz3IlPRjXtK5UjSQaYfLBQsWuM0v6fwf3ua6rp50XfXgV5ERU9rBWQ+k2rnXNVw6OkV60vfQgKc1TDoypiQXOorI8TlpeHIdnVRWKPX8fHJzc+WDDz6QGTNmuAUFrdHTA4k3l8MxOk/3HU8lzSuNfmau21w/i86dO5th8XoQa9KkiXO/0073WrvjGSq1Q39F96XSlt/1+6BlS9vnXLfZpaIBUTvx/u53vzMdxD05got+7mUFcG8qbTtqDZG39m+ltUXJycnmtnb6fvvtt822OHPmjPlDERVDMxYuOf3h1r9+9CCqByjHeVA8/zJ10JFQjoOBK60W1poWHamgunbtal5DT7TnSptbSqrVKI3+ADmWxeGrr75yGw1VWXS0mo7CWb58udsJ/rT6W6v1HSO0LtSTTz5pnqfbvDyhTcvqKCatRtcaCD1Ye06eNQvno6NS9LPX89+U9Ho6OWoCyvpsdLt4fj5//etfpaioyG1eaftTeZdD/wLXZp1Vq1a5/aWv2+/999+/oHX3XC4NnvqaOvrIdd0810sP9IcPH67Q++gBV0PBW2+95TZfm1s8vw/6HdLtoaODPJuudbl0X7jUxowZY2qTNdR6NuU4gpp+Lz1HMVaW0rajYxSkN/ZvTzo6TGte9fxYnp8NLgw1O7gktI9D+/btzRf+iiuuMD8SSUlJ5q8YHWqr9AutdDi6tntrHwc9+Oukf+HpX0z6V3K/fv1MQNJmDP2rWA/kSqt7teYoISHB1ExofxMdmq5/QWt1sWsfg7LomVq1+UprD7Q5QfvRaLWy1mRo005l0mXU6mv9i1+XY9y4ceavV23D1xoxx3DZC6U1ONp0okO0dTtrvxHtk6QHVx0avH79elPOdVi6fg7aH+u3v/2tKa8HGD3Q6xBpPdhf6AkItV+Nfob62WrI1cCr/SN0qL/259L/PWvTPOnyafOnbg/9S1qXSZs59C9hz9oP3d/UG2+8Yar/9cCvn6GGtPIuh+4H2t9Ca2O0SUEDlTa9ahAqb01SSXS/0ma3xYsXm4O5Lpd+3jpkWk+voAFz+/btZj09azPLS78DeqZmDVTaFKZD8bUWVWsJPJtk9DukwUb73Oi+rt9LDVp6qgL97Ms6WWVl0ZpFreXS77FrnySly69D7R3fdV1ebf7RYFpZtFbUdTvqsH/9w0T7IXpr/9bw9thjj5n30N9FDUf6PdP5rjWZqAAvd3iGDTlGI2zdurXEx3WUxPlGY82dO9eMJIqMjDTDQHWYrA6H/eGHH9yep0NBdfSLjlZxHW2hIxpeeuklM0w0MDDQvM6IESPMkHVX586dM6MjdBSRvs+1115rhsfqaAjX0RRljWQqKCiwJk+ebIbi6miWTp06mWGhuj6u6+kY+aGjyFyV9trn246u9P10JJC+v47m0ZFgOvqjPO9Tlu+++84M8dfhrTqkWUev6Trde++9Zqizbj9Xuo463Fa3hW53HcGin6NjBEpZy1HSyBilw2l1n9GRK/qa+tp63/X5rqNzPB06dMiM1KlXr54VFhZm3X777dbu3buL7XNKR7W0aNHCjLDxXJbyLIfS0xLofuTYb2fPnu1cvooOPVe7du0y+7kOg3aMntLvhI66CwkJMcPd9VQDOmqqpJGE5dne+nnqaR90iLTj+6DDoT1fUx04cMAaPny4FRERYbaH7iO6b7uOuqqsfb6sz1v3N33MdTSWysrKMkP49fPT0Vv6e6BDxksajVXSZ6Dr365du2LzPUd+OdZBT72gQ+svv/xy893R0XDffvttsedfzP6tpyIYPXq01aZNG7PMOnRfP7N58+a5nVIAF85P/6lISAKqCz1Lrf61rDU12k8IAFCzEHZgK1p
drM01OixUmzy0CUqbhXSo++7du0sdlQUAsC/67MBWtC+Ftq1rHw7t46LD4rUDoQ7bJOgAQM1EzQ4AALA1hp4DAABbI+wAAABbI+wAAABbo4OyiDk7p54WXU88VtLpuwEAQNWjZ8/Rk53qddzKOnEsYef/Xf/FcX0aAABQvehZrcs62zhhR8R5JVndWK6nywcAAFWXnkNNKyvOd0V4wo7LlWc16BB2AACoXs7XBYUOygAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYIOwAAwNYCfL0Adnfw4EE5cuSI27zIyEhp2rSpz5YJAICahLBTyUGndZtr5PSpk27z6wSHyL7/7CXwAABwCRB2KpHW6GjQiRg4SQIjmph5hUcz5egHc81j1O4AAGDzPjtnz56VZ599Vlq0aCHBwcFy5ZVXyvPPPy/nzp1zlrEsS2bOnCmNGjUyZXr06CF79uxxe52CggKZMGGCaR4KDQ2VwYMHy6FDh6Sq0KATFN3STI7QAwAAakDYeemll2ThwoUyf/582bt3r8yZM0defvll+fOf/+wso/MSExNNma1bt0p0dLT06dNH8vPznWXi4+Nl9erVkpqaKps2bZLjx4/LwIEDpaioyEdrBgAAqgqfNmP9+9//ljvuuEMGDBhg7jdv3lxWrFgh27Ztc9bqJCUlyfTp02XIkCFm3tKlSyUqKkqWL18u48aNk9zcXElOTpZly5ZJ7969TZmUlBRp0qSJbNiwQfr27evDNQQAADW6Zqd79+7y4YcfyjfffGPuf/nll6Zmpn///uZ+RkaGZGdnS2xsrPM5QUFBEhMTI5s3bzb3t2/fLoWFhW5ltMmrffv2zjKetNkrLy/PbQIAAPbk05qdp59+2tTMtGnTRvz9/U2z04svvij333+/eVyDjtKaHFd6/8CBA84ytWvXlnr16hUr43i+p4SEBHnuuecqaa0AAEBV4tOanZUrV5omJ22S2rFjh2mi+tOf/mT+d+Xn5+d2X5u3POd5KqvMtGnTTMhyTJmZmV5YGwAAUBX5tGbnqaeekqlTp8p9991n7nfo0MHU2GjNy6hRo0xnZKU1NA0bNnQ+Lycnx1nbo2XOnDkjx44dc6vd0TLdunUr8X21KUwnAABgfz6t2Tl58qTUquW+CNqc5Rh6rkPSNcykpaU5H9dgk56e7gwynTt3lsDAQLcyWVlZsnv37lLDDgAAqDl8WrMzaNAg00dHT67Xrl07+eKLL8ww8zFjxpjHtRlKh5XPmjVLWrVqZSa9HRISIsOHDzdlwsPDZezYsTJp0iSJiIiQ+vXry+TJk00tkWN0FgAAqLl8Gnb0fDq///3vJS4uzjQ76SgqHU7+hz/8wVlmypQpcurUKVNGm6q6du0q69evl7CwMGeZefPmSUBAgAwdOtSU7dWrlyxZssTUEgEAgJrNz9KevDWcDj3XGiLtrFy3bl2vva52utZmtuhRSebsyaoge79kL403Q+Y7derktfcCAKCmySvn8dunfXYAAAAqG2EHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYGmEHAADYmk/DTvPmzcXPz6/Y9Oijj5rHLcuSmTNnSqNGjSQ4OFh69Oghe/bscXuNgoICmTBhgkRGRkpoaKgMHjxYDh065KM1AgAAVY1Pw87WrVslKyvLOaWlpZn59957r/l/zpw5kpiYKPPnzzdlo6OjpU+fPpKfn+98jfj4eFm9erWkpqbKpk2
b5Pjx4zJw4EApKiry2XoBAICqw6dh54orrjABxjF98MEHctVVV0lMTIyp1UlKSpLp06fLkCFDpH379rJ06VI5efKkLF++3Dw/NzdXkpOTZe7cudK7d2/p2LGjpKSkyK5du2TDhg2+XDUAAFBFVJk+O2fOnDFBZcyYMaYpKyMjQ7KzsyU2NtZZJigoyAShzZs3m/vbt2+XwsJCtzLa5KXByFGmJNr0lZeX5zYBAAB7qjJh591335Vff/1VRo8ebe5r0FFRUVFu5fS+4zH9v3bt2lKvXr1Sy5QkISFBwsPDnVOTJk0qYY0AAEBVUGXCjjZH9evXz9TMuNJaHlfavOU5z9P5ykybNs00gTmmzMzMi1x6AABQVVWJsHPgwAHTx+bhhx92ztM+PMqzhiYnJ8dZ26NltPnr2LFjpZYpiTaH1a1b120CAAD2VCXCzuLFi6VBgwYyYMAA57wWLVqYMOMYoaU02KSnp0u3bt3M/c6dO0tgYKBbGR3VtXv3bmcZAABQswX4egHOnTtnws6oUaMkIOD/XxxthtJh5bNmzZJWrVqZSW+HhITI8OHDTRntbzN27FiZNGmSRERESP369WXy5MnSoUMHMzoLAADA52FHm68OHjxoRmF5mjJlipw6dUri4uJMU1XXrl1l/fr1EhYW5iwzb948E5KGDh1qyvbq1UuWLFki/v7+l3hNAABAVeRnaW/eGk6HnmstkXZW9mb/nR07dpimtuhRSRIU3dLMK8jeL9lL482w+U6dOnntvQAAqGnyynn8rhJ9dgAAACoLYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANgaYQcAANiaz8PO4cOHZcSIERIRESEhISFy/fXXy/bt252PW5YlM2fOlEaNGklwcLD06NFD9uzZ4/YaBQUFMmHCBImMjJTQ0FAZPHiwHDp0yAdrAwAAqhqfhp1jx47JLbfcIoGBgbJmzRr5+uuvZe7cuXL55Zc7y8yZM0cSExNl/vz5snXrVomOjpY+ffpIfn6+s0x8fLysXr1aUlNTZdOmTXL8+HEZOHCgFBUV+WjNAABAVRHgyzd/6aWXpEmTJrJ48WLnvObNm7vV6iQlJcn06dNlyJAhZt7SpUslKipKli9fLuPGjZPc3FxJTk6WZcuWSe/evU2ZlJQU87obNmyQvn37+mDNAABAVeHTmp333ntPunTpIvfee680aNBAOnbsKIsWLXI+npGRIdnZ2RIbG+ucFxQUJDExMbJ582ZzX5u8CgsL3cpok1f79u2dZQAAQM3l07Dz/fffy4IFC6RVq1aybt06GT9+vDz++OPy5ptvmsc16CityXGl9x2P6f+1a9eWevXqlVrGk/bxycvLc5sAAIA9+bQZ69y5c6ZmZ9asWea+1uxo52MNQCNHjnSW8/Pzc3ueNm95zvNUVpmEhAR57rnnvLIOAACgavNpzU7Dhg2lbdu2bvOuueYaOXjwoLmtnZGVZw1NTk6Os7ZHy5w5c8Z0di6tjKdp06aZvj6OKTMz06vrBQAAqg6fhh0dibVv3z63ed988400a9bM3G7RooUJM2lpac7HNdikp6dLt27dzP3OnTub0VyuZbKysmT37t3OMp6030/dunXdJgAAYE8+bcZ68sknTSDRZqyhQ4fK559/Lm+88YaZlDZD6bByfVz79eikt/V8PMOHDzdlwsPDZezYsTJp0iRzrp769evL5MmTpUOHDs7RWQAAoObyadi54YYbzPlxtFnp+eefNzU5OtT8gQcecJaZMmWKnDp1SuLi4kxTVdeuXWX9+vUSFhbmLDN
v3jwJCAgwgUnL9urVS5YsWSL+/v4+WjMAAFBV+Fnak7eG09FYWkOk/Xe82aS1Y8cO08wWPSpJgqJbmnkF2fsle2m8GTLfqVMnr70XAAA1TV45j98+v1wEAABAZSLsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAWyPsAAAAW/Np2Jk5c6b4+fm5TdHR0c7HLcsyZRo1aiTBwcHSo0cP2bNnj9trFBQUyIQJEyQyMlJCQ0Nl8ODBcujQIR+sDQAAqIp8XrPTrl07ycrKck67du1yPjZnzhxJTEyU+fPny9atW00Q6tOnj+Tn5zvLxMfHy+rVqyU1NVU2bdokx48fl4EDB0pRUZGP1ggAAFQlAT5fgIAAt9oc11qdpKQkmT59ugwZMsTMW7p0qURFRcny5ctl3LhxkpubK8nJybJs2TLp3bu3KZOSkiJNmjSRDRs2SN++fS/5+gAAABvU7GRkZHhtAb799lvTTNWiRQu577775Pvvv3e+R3Z2tsTGxjrLBgUFSUxMjGzevNnc3759uxQWFrqV0ddq3769swwAAKjZKhR2WrZsKT179jS1KKdPn67wm3ft2lXefPNNWbdunSxatMiEm27dusnRo0fNbaU1Oa70vuMx/b927dpSr169UsuURPv55OXluU0AAMCeKhR2vvzyS+nYsaNMmjTJNEFpk9Lnn39+wa/Tr18/ufvuu6VDhw6mGeof//iHs7nKQTstezZvec7zdL4yCQkJEh4e7py02QsAANhThcKONhNpx+HDhw/L4sWLTS1K9+7dTWdjnf/zzz9XaGF0NJUGH23acvTj8ayhycnJcdb2aJkzZ87IsWPHSi1TkmnTppn+Po4pMzOzQssLAABsPhpLOxffdddd8vbbb8tLL70k3333nUyePFkaN24sI0eONKOrLoQ2L+3du1caNmxo+vBomElLS3M+rsEmPT3dNHWpzp07S2BgoFsZfc/du3c7y5RE+/7UrVvXbQIAAPZ0UWFn27ZtEhcXZ8KJ1uho0NHA89FHH5lanzvuuKPM52t5DS/aGfmzzz6Te+65x/SfGTVqlGmG0mHls2bNMkPLNcCMHj1aQkJCZPjw4eb52gQ1duxY05z24YcfyhdffCEjRoxwNosBAABUaOi5Bhttvtq3b5/079/fdDLW/2vV+m920lqZ119/Xdq0aVPm6+jJ/+6//345cuSIXHHFFXLTTTfJli1bpFmzZubxKVOmyKlTp0yg0qYq7dC8fv16CQsLc77GvHnzTA3T0KFDTdlevXrJkiVLxN/fn08XAACIn6W9eS9Qq1atZMyYMfLQQw+VeI4cR5PTihUrTC1NVae1SVpLpP13vNmktWPHDtPUFj0qSYKiW5p5Bdn7JXtpvBk236lTJ6+9FwAANU1eOY/fFarZ0Q7E56NDwqtD0AEAAPZWoT472oT1zjvvFJuv81yHjQMAAFTLsDN79mxz4U1PDRo0MB2KAQAAqnXYOXDggOmE7Ek7Fh88eNAbywUAAOC7sKM1OF999VWJZ1aOiIjwxnIBAAD4LuzoBTsff/xx2bhxoxQVFZlJz63zxBNPmMcAAACqigqNxnrhhRdMU5ae00bPcaPOnTtnzppMnx0AAFDtw44OK1+5cqX88Y9/NE1XwcHB5qzFjpMBAgAAVOuw43D11VebCQAAwFZhR/vo6CUZ9HpUeoVxbcJypf13AAAAqm3Y0Y7IGnYGDBgg7du3NxftBAAAsE3YSU1Nlbfffttc/BMAAMB2Q8+1g3LLlv+9sCUAAIDtws6kSZPklVdekQpcMB0AAKDqN2N
t2rTJnFBwzZo10q5dOwkMDHR7fNWqVd5aPgAAgEsfdi6//HK56667Lu6dAQAAqmrYWbx4sfeXBAAAoKr02VFnz56VDRs2yOuvvy75+flm3o8//ijHjx/35vIBAABc+podvS7W7bffLgcPHpSCggLp06ePhIWFyZw5c+T06dOycOHCi1sqAAAAX9bs6EkFu3TpIseOHTPXxXLQfjx6VmUAAIBqPxrrX//6lznfjiu9EOjhw4e9tWwAAAC+qdnRa2Hp9bE8HTp0yDRnAQAAVOuwo310kpKSnPf12ljaMXnGjBlcQgIAAFT/Zqx58+ZJz549pW3btqZD8vDhw+Xbb7+VyMhIWbFihfeXEgAA4FKGnUaNGsnOnTtNsNmxY4dp1ho7dqw88MADbh2WAQAAfK1CYUdpqBkzZoyZAAAAbBV23nzzzTIfHzlyZEWXBwAAwPdhR8+z46qwsFBOnjxphqKHhIQQdgAAQPUejaUnE3SddCTWvn37pHv37nRQBgAA9rg2lqdWrVrJ7Nmzi9X6AAAA2CLsKH9/f3MxUAAAgGrdZ+e9995zu29ZlmRlZcn8+fPllltu8dayAQAA+Cbs3HnnnW739QzKV1xxhdx2220yd+7ci18qAAAAX18by3XS62RlZ2fL8uXLpWHDhhVakISEBBOa4uPj3WqMZs6caU5iqOf16dGjh+zZs8fteQUFBTJhwgRz9ubQ0FAZPHiwuUYXAACA1/vsVNTWrVvljTfekGuvvdZt/pw5cyQxMdE0j2mZ6Ohoc12u/Px8ZxkNR6tXr5bU1FRzNXYdGTZw4MASL1QKAABqngo1Y02cOLHcZTWslEXDiV5mYtGiRfLCCy+41eroxUanT58uQ4YMMfOWLl0qUVFRpgZp3LhxkpubK8nJybJs2TLp3bu3KZOSkiJNmjSRDRs2SN++fSuyegAAoKaHnS+++MJcE+vs2bPSunVrM++bb74xo7E6derkLKfNUufz6KOPyoABA0xYcQ07GRkZpmksNjbWOS8oKEhiYmJk8+bNJuxs377dnNDQtYw2ebVv396UKS3saNOXTg55eXkV2AoAAMC2YWfQoEESFhZmalrq1atn5unJBR966CH57W9/K5MmTSrX62jTk4YmbaLypEFHaU2OK71/4MABZxk9a7NjGVzLOJ5fWv+g5557rlzLCAAAamCfHR1xpYHBNWToba2ZKe9orMzMTHMCQm12qlOnTqnlPGuHtHnrfDVG5yszbdo00wTmmHRZAACAPVUo7Gizz08//VRsfk5Ojlvn4bJoE5SW79y5swQEBJgpPT1dXn31VXPbUaPjWUOjz3E8ph2Wz5w5Y2qVSitTEm0Oq1u3rtsEAADsqUJh56677jJNVn/729/MMG+d9PbYsWOdnYnPp1evXrJr1y7ZuXOnc+rSpYvprKy3r7zyShNm0tLSnM/RYKOBqFu3bua+BqXAwEC3Mnpyw927dzvLAACAmq1CfXYWLlwokydPlhEjRpgOwuaFAgJM2Hn55ZfL9Rra50c7ErvS8+REREQ45+uw8lmzZpnrbumkt/Wq6sOHDzePh4eHm/fUPkL6vPr165vl6tChg3N0FgAAqNkqFHY0cLz22msm2Hz33Xemj0zLli1NWPGmKVOmyKlTpyQuLs40VXXt2lXWr19vgpLDvHnzTNAaOnSoKas1RkuWLDEjwwAAAPwsTSoVtH//fhN2br31VnOG4/J0Hq6KtA+S1hJpZ2Vv9t/RkWba1BY9KkmColuaeQXZ+yV7abzps+Q6TB8AAFTO8btCfXaOHj1qalCuvvpq6d+/v+knox5++OFyDzsHAAC4FCoUdp588knTMfjgwYOmScth2LBhsnbtWm8uHwAAwKXvs6P9ZtatWyeNGzd2m6+diB0n/AMAAKi2NTsnTpxwq9FxOHLkiDmHDQAAQLUOO9oh+c0333Te107J586dM6Ozevbs6c3lAwAAuPTNWBpqevToIdu2bTMn+tMh4nv27JFffvlF/vW
vf13cEgEAAPi6Zqdt27by1VdfyY033ih9+vQxzVp65mS9GvpVV13lzeUDAAC4tDU7esbk2NhYef3117lyOAAAsF/Njg4512tPVceTBwIAgJqnQs1YI0eOlOTkZO8vDQAAQFXooKydkv/617+aq43rlco9r4mVmJjoreUDAAC4dGHn+++/l+bNm5tmLMd1nb755hu3MjRvAQCAaht29AzJeh2sjRs3Oi8P8eqrr0pUVFRlLR8AAMCl67PjeYH0NWvWmGHnAAAAtuqgXFr4AQAAqNZhR/vjePbJoY8OAACwTZ8drckZPXq082Kfp0+flvHjxxcbjbVq1SrvLiUAAMClCDujRo1yuz9ixIiKvi8AAEDVCzuLFy+uvCUBAACoah2UAQAAqjrCDgAAsDXCDgAAsDXCDgAAsDXCDgAAsDXCDgAAsDXCDgAAsDXCDgAAsDXCDgAAsDXCDgAAsDXCDgAAsDXCDgAAsDXCDgAAsDWfhp0FCxbItddeK3Xr1jXTzTffLGvWrHE+blmWzJw5Uxo1aiTBwcHSo0cP2bNnj9trFBQUyIQJEyQyMlJCQ0Nl8ODBcujQIR+sDQAAqIp8GnYaN24ss2fPlm3btpnptttukzvuuMMZaObMmSOJiYkyf/582bp1q0RHR0ufPn0kPz/f+Rrx8fGyevVqSU1NlU2bNsnx48dl4MCBUlRU5MM1AwAAVYVPw86gQYOkf//+cvXVV5vpxRdflMsuu0y2bNlianWSkpJk+vTpMmTIEGnfvr0sXbpUTp48KcuXLzfPz83NleTkZJk7d6707t1bOnbsKCkpKbJr1y7ZsGGDL1cNAABUEVWmz47WxGjtzIkTJ0xzVkZGhmRnZ0tsbKyzTFBQkMTExMjmzZvN/e3bt0thYaFbGW3y0mDkKFMSbfrKy8tzmwAAgD35POxoLYzW5miQGT9+vGmSatu2rQk6Kioqyq283nc8pv/Xrl1b6tWrV2qZkiQkJEh4eLhzatKkSaWsGwAA8D2fh53WrVvLzp07TdPVI488IqNGjZKvv/7a+bifn59beW3e8pzn6Xxlpk2bZprAHFNmZqYX1gQAAFRFPg87WjPTsmVL6dKli6lxue666+SVV14xnZGVZw1NTk6Os7ZHy5w5c0aOHTtWapmSaC2SYwSYYwIAAPbk87BTUq2M9qlp0aKFCTNpaWnOxzTYpKenS7du3cz9zp07S2BgoFuZrKws2b17t7MMAACo2QJ8+ebPPPOM9OvXz/SZ0eHk2kH5448/lrVr15pmKB1WPmvWLGnVqpWZ9HZISIgMHz7cPF/724wdO1YmTZokERERUr9+fZk8ebJ06NDBjM4CAADwadj56aef5MEHHzS1MRpc9ASDGnT0XDpqypQpcurUKYmLizNNVV27dpX169dLWFiY8zXmzZsnAQEBMnToUFO2V69esmTJEvH39/fhmgEAgKrCz9J2oxpOh55r2NLOyt7sv7Njxw7T1BY9KkmColuaeQXZ+yV7abwZNt+pUyevvRcAADVNXjmP31Wuzw4AAIA3EXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICtEXYAAICt+TTsJCQkyA033CBhYWHSoEEDufPOO2Xfvn1uZSzLkpkzZ0qjRo0kODhYevToIXv27HErU1BQIBMmTJDIyEgJDQ2VwYMHy6FDhy7x2gAAgKrIp2EnPT1dHn30UdmyZYukpaXJ2bNnJTY2Vk6cOOEsM2fOHElMTJT58+fL1q1bJTo6Wvr06SP5+fnOMvHx8bJ69WpJTU2VTZs2yfHjx2XgwIF
SVFTkozUDAABVRYAv33zt2rVu9xcvXmxqeLZv3y633nqrqdVJSkqS6dOny5AhQ0yZpUuXSlRUlCxfvlzGjRsnubm5kpycLMuWLZPevXubMikpKdKkSRPZsGGD9O3b1yfrBgAAqoYq1WdHg4uqX7+++T8jI0Oys7NNbY9DUFCQxMTEyObNm819DUaFhYVuZbTJq3379s4ynrTZKy8vz20CAAD2VGXCjtbiTJw4Ubp3726CitKgo7Qmx5Xedzym/9euXVvq1atXapmS+gqFh4c7J60FAgAA9lRlws5jjz0mX331laxYsaLYY35+fsWCkec8T2WVmTZtmqlFckyZmZkXufQAAKCqqhJhR0dSvffee7Jx40Zp3Lixc752RlaeNTQ5OTnO2h4tc+bMGTl27FipZTxpU1jdunXdJgAAYE8+DTta+6I1OqtWrZKPPvpIWrRo4fa43tcwoyO1HDTY6Ciubt26mfudO3eWwMBAtzJZWVmye/duZxkAAFBz+XQ0lg4711FV//d//2fOteOowdF+NHpOHW2G0mHls2bNklatWplJb4eEhMjw4cOdZceOHSuTJk2SiIgI07l58uTJ0qFDB+foLAAAUHP5NOwsWLDA/K8nCvQcgj569Ghze8qUKXLq1CmJi4szTVVdu3aV9evXm3DkMG/ePAkICJChQ4easr169ZIlS5aIv7//JV4jAABQ1QT4uhnrfLR2R8+grFNp6tSpI3/+85/NBAAAUOU6KAMAAFQWwg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1wg4AALA1n4adTz75RAYNGiSNGjUSPz8/effdd90etyxLZs6caR4PDg6WHj16yJ49e9zKFBQUyIQJEyQyMlJCQ0Nl8ODBcujQoUu8JgAAoKryadg5ceKEXHfddTJ//vwSH58zZ44kJiaax7du3SrR0dHSp08fyc/Pd5aJj4+X1atXS2pqqmzatEmOHz8uAwcOlKKioku4JgAAoKoK8OWb9+vXz0wl0VqdpKQkmT59ugwZMsTMW7p0qURFRcny5ctl3LhxkpubK8nJybJs2TLp3bu3KZOSkiJNmjSRDRs2SN++fS/p+gAAgKqnyvbZycjIkOzsbImNjXXOCwoKkpiYGNm8ebO5v337diksLHQro01e7du3d5YBAAA1m09rdsqiQUdpTY4rvX/gwAFnmdq1a0u9evWKlXE8vyTaz0cnh7y8PC8vPQAAqCqqbM2Og3Zc9mze8pzn6XxlEhISJDw83DlpsxcAALCnKht2tDOy8qyhycnJcdb2aJkzZ87IsWPHSi1TkmnTppn+Po4pMzOzUtYBAAD4XpUNOy1atDBhJi0tzTlPg016erp069bN3O/cubMEBga6lcnKypLdu3c7y5RE+/7UrVvXbQIAAPbk0z47Okx8//79bp2Sd+7cKfXr15emTZuaYeWzZs2SVq1amUlvh4SEyPDhw015bYIaO3asTJo0SSIiIszzJk+eLB06dHCOzgIAADWbT8POtm3bpGfPns77EydONP+PGjVKlixZIlOmTJFTp05JXFycaarq2rWrrF+/XsLCwpzPmTdvngQEBMjQoUNN2V69epnn+vv7+2SdAABA1eJnaW/eGk5HY2ktkfbf8WaT1o4dO0xTW/SoJAmKbmnmFWTvl+yl8WbYfKdOnbz2XgAA1DR55Tx+V9k+OwAAAN5A2AEAALZG2AEAALZG2AEAALZG2AEAALZG2AEAALZG2AEAALZG2AEAALZG2AEAALZG2AEAALZG2AEAALZG2AEAALZG2AEAALZG2AE
AALZG2AEAALZG2AEAALYW4OsFAGAPBw8elCNHjrjNi4yMlKZNm/psmQBAEXYAeCXotG5zjZw+ddJtfp3gENn3n70EHgA+RdgBcNG0RkeDTsTASRIY0cTMKzyaKUc/mGseo3YHgC8RdgB4jQadoOiWbFEAVQodlAEAgK0RdgAAgK0RdgAAgK0RdgAAgK0RdgAAgK0RdgAAgK0RdgAAgK0RdgAAgK0RdgAAgK0RdgAAgK1xuQgf2bt3r9t9rg4NAEDlIOxcYkXHj4n4+cmIESPc5nN1aAAAKodtmrFee+01adGihdSpU0c6d+4sn376qVRF5wqOi1iWuTp09KgkM+ltvWK0Xh0aAAB4ly1qdlauXCnx8fEm8Nxyyy3y+uuvS79+/eTrr7+Wpk2bil2uDn3w4EG3QETTFwAANSTsJCYmytixY+Xhhx8295OSkmTdunWyYMECSUhIEDvQoNO6zTWmBshXTV+ELQBAdVTtw86ZM2dk+/btMnXqVLf5sbGxsnnzZqmunZY9a220RkeDjjZ5aa1Q4dFMOfrBXDPftVxFAonnc0p6XlUIW4Arwjcqg132K7ush7dU+7CjH2ZRUZFERUW5zdf72dnZJT6noKDATA65ubnm/7y8PK8u2/Hjx//7ftn75dyZ0+a2hhTPeQU//jfkuHZarh1UR1KWvelcr3379pn/zxUWmOfp/0qDnuN9fvrpJxnx4Eg5U3C61NdRtWrVknPnzpX6nNLeX4NO3RuGiH/4FVKU+7PkbV1latBat25d4muXdL8yy/j6/WvyMjr2T7d9/ZdDxfZRby1jRfb16rAdq2KZmrSMlblfXcptVNH1qMxljI6ONpO3OY7blmWVXdCq5g4fPqxraG3evNlt/gsvvGC1bt26xOfMmDHDPIeJbcA+wD7APsA+wD4g1X4bZGZmlpkVqn3NjlbN+fv7F6vFycnJKVbb4zBt2jSZOHGi876mz19++UUiIiLEz8/vohJmkyZNJDMzU+rWrVvh1wHbuiphv2Zb2xH7tT22tdbo5OfnS6NGjcosV+3DTu3atc1Q87S0NLnrrruc8/X+HXfcUeJzgoKCzOTq8ssv99oy6YdJ2Lk02NaXDtuabW1H7NfVf1uHh4eft0y1DztKa2kefPBB6dKli9x8883yxhtvmM5Z48eP9/WiAQAAH7NF2Bk2bJgcPXpUnn/+ecnKypL27dvLP//5T2nWrJmvFw0AAPiYLcKOiouLM5MvadPYjBkzijWRgW1dnbFfs63tiP26Zm1rP+2l7LN3BwAAqGS2uTYWAABASQg7AADA1gg7AADA1gg7AADA1gg7XvLaa69JixYtpE6dOuYkh59++qm3XrrG0ivW33DDDRIWFiYNGjSQO++803kNJgftXz9z5kxz9szg4GDp0aOH7Nmzx2fLbKdtr2cTj4+Pd85jW3vP4cOHzbXw9KztISEhcv3115triLGtvevs2bPy7LPPmt9m/X248sorzSlKXK/ZxH5dcZ988okMGjTI/P7q78W7777r9nh5tq1ep3LChAnmagihoaEyePBgOXTov9fV8ypvXqeqpkpNTbUCAwOtRYsWWV9//bX1xBNPWKGhodaBAwd8vWjVWt++fa3Fixdbu3fvtnbu3GkNGDDAatq0qXX8+HFnmdmzZ1thYWHW3//+d2vXrl3WsGHDrIYNG1p5eXk+Xfbq7PPPP7eaN29uXXvttWZfdmBbe8cvv/xiNWvWzBo9erT12WefWRkZGdaGDRus/fv3s629TK+RGBERYX3wwQdmO7/zzjvWZZddZiUlJbGtveCf//ynNX36dPP7q3Fi9erVbo+X5zdj/Pjx1m9+8xsrLS3N2rFjh9WzZ0/ruuuus86ePWt5E2HHC2688Ubzgblq06aNNXXqVG+8PP6fnJwc84VKT08398+dO2dFR0ebL5TD6dOnrfDwcGvhwoVstwrIz8+3WrVqZX54YmJinGGHbe09Tz/9tNW9e/dSH2dbe4/+gTRmzBi
3eUOGDLFGjBjBtvYyz7BTnv34119/NRUFWmHgenHvWrVqWWvXrvXq8tGMdZHOnDljqp9jY2Pd5uv9zZs3X+zLw0Vubq75v379+ub/jIwMcwFY122vJ62KiYlh21fQo48+KgMGDJDevXu7zWdbe897771nLm1z7733mubZjh07yqJFi9jWlaB79+7y4YcfyjfffGPuf/nll7Jp0ybp37+/uc9+XXnKs2312FlYWOhWRpu89CoI3j5+2uYMyr5y5MgRKSoqKnaFdb3veSV2VJz+4aDXQNMfL/0iKMf2LWnbHzhwgM19gVJTU2XHjh2ydevWYo+xrb3n+++/lwULFpj9+ZlnnpHPP/9cHn/8cXMgGDlyJNvai55++mnzR1KbNm3E39/f/Fa/+OKLcv/995vH2a8rT3m2rZbRi3nXq1ev0o+fhB0v0c5Zngdnz3mouMcee0y++uor81cZ2977MjMz5YknnpD169ebTvalYT+/eNo5Vmt2Zs2aZe5rzY522tQApGGHbe09K1eulJSUFFm+fLm0a9dOdu7caTrda+3BqFGj2NaXQEV+Myrj+Ekz1kXSHuT6F4NnCs3JySmWaFEx2lNfq/43btwojRs3ds6Pjo42/7PtL55WJ+s+qyMJAwICzJSeni6vvvqque3Yl9nWF69hw4bStm1bt3nXXHONHDx40Nxmv/aep556SqZOnSr33XefdOjQQR588EF58sknzWhDtnXlKs9+rGW0K8ixY8dKLeMthJ2LpFVweoBIS0tzm6/3u3XrdrEvX6NputcanVWrVslHH31kho+60vv6ZXHd9vrF0YM02/7C9OrVS3bt2mX+8nVMWvvwwAMPmNs6ZJdt7R233HJLsVMoaJ+SZs2amdvs195z8uRJqVXL/TCnf5w6hp6zrStPebatHjsDAwPdymRlZcnu3bu9/xvu1e7ONXzoeXJyshl6Hh8fb4ae//DDD75etGrtkUceMT33P/74YysrK8s5nTx50llGe/prmVWrVpmhjffffz9Dz73EdTQW29q7Q/sDAgKsF1980fr222+tt956ywoJCbFSUlLY1l42atQoM6zZMfRcfyciIyOtKVOmsK29NHrziy++MJPGicTERHPbcdqV8vw+60jmxo0bm9Mv6NDz2267jaHnVdlf/vIXc+6M2rVrW506dXIOj0bF6ZenpEnPveM6vHHGjBlmiGNQUJB16623mi8VvB922Nbe8/7771vt27c3+6yepuKNN95we5xt7R16UNV9WM/PVadOHevKK68054UpKChgW3vBxo0bS/yN1pBZ3v341KlT1mOPPWbVr1/fCg4OtgYOHGgdPHjQ8jY//ce7dUUAAABVB312AACArRF2AACArRF2AACArRF2AACArRF2AACArRF2AACArRF2AACArRF2APhM8+bNJSkpqUZ+Anqhw3fffdfXiwHUCIQdoAYbPXq0OejqpBf8bNq0qTzyyCPFLsxnNzNnzjTrPH78eLf5eh0wnf/DDz/4bNkAeB9hB6jhbr/9dnPxPT3A//Wvf5X3339f4uLixO7q1KkjycnJ5iKcdqEXWgRQHGEHqOGCgoLM1YkbN24ssbGxMmzYMFm/fr3z8aKiIhk7dqy5inFwcLC0bt1aXnnllWI1RHfeeaf86U9/koYNG0pERIQ8+uijUlhY6CyTk5MjgwYNMq+hr/XWW28VW5aDBw/KHXfcIZdddpnUrVtXhg4dKj/99JNbjcz1118v//u//2tqobSc1kTpMs6ZM8esR4MGDeTFF18873rrevTs2VOeffbZUsssWbJELr/8crd52vSktT/eWCYNmf369XNuk3feecft8cOHD5vPo169emab6rZxrXVybPeEhARp1KiRXH311eddb6AmCvD1AgCoOr7//ntZu3atBAYGOuedO3fOBKG3335bIiMjZfPmzfK73/3OhBoNIw4bN2408/T//fv3m4O0hoD/+Z//cR6YMzMz5aOPPpLatWvL448/bgK
Qg16mTw/coaGhkp6eLmfPnjU1TPo6H3/8sbPcd999J2vWrDHLqbfvueceycjIMAd6fZ4u35gxY6RXr15y0003lbm+s2fPlhtuuEG2bt1q/q+oii7T73//e7MMGh6XLVsm999/v7Rv316uueYaOXnypAljv/3tb+WTTz4xzYwvvPCCqYn76quvzDZUH374oQmGaWlpZhsCKIHXLy0KoNrQqxP7+/tboaGh5qrQjqsWJyYmlvm8uLg46+6773Z7nWbNmllnz551zrv33nutYcOGmdv79u0zr7tlyxbn43v37jXz5s2bZ+6vX7/eLIvrFY/37Nljynz++efmvl5BOSQkxFzN2qFv375W8+bNraKiIue81q1bWwkJCaUuv77OddddZ27fd9991m233WZuf/HFF+b9MjIyzP3Fixdb4eHhbs9dvXq1KeP6WhVZJn2N8ePHu712165drUceecTcTk5ONs/RK0c76NW69crQ69atc273qKgot6t4AyiOmh2ghtPagwULFpiaBO2zo31YJkyY4FZm4cKF5rEDBw7IqVOnTN8QrbVx1a5dO/H393fe11qeXbt2mdt79+41NRNdunRxPt6mTRu3JiIt06RJEzM5tG3b1pTRxxw1LzqCKywszFkmKirKvG+tWrXc5rnWGpVFa0u0JkWb7rS5qSIqukw333xzsfvaSVpt377d1JC5vq46ffq0qT1y6NChg7OWB0DJ6LMD1HDabNSyZUu59tpr5dVXX5WCggJ57rnnnI9r89WTTz5pmmE0EOjB+KGHHirWGda16UtpvxZtAlOO5hXXvi6etExJj3vOL+l9ynrv87nqqqtMU9vUqVOLNQNpWPGc59oPqTKWybGuWrZz585me7tOGkaHDx/u9vkBKBthB4CbGTNmmI7GP/74o7n/6aefSrdu3Uz/mY4dO5pg5FqzUB5ac6J9cLZt2+act2/fPvn111/danG0g7L263H4+uuvJTc31zy/Mv3hD38wISI1NdVt/hVXXCH5+fly4sQJ5zxHzYs3bNmypdh9rfFSnTp1km+//dbUNuk2d53Cw8O9tgxATUDYAeCmR48epklq1qxZ5r4eXDWkrFu3zgQC7VSrHXovhI580o61WoPy2WefmSaahx9+2IxCcujdu7epXXrggQdkx44d8vnnn8vIkSMlJibGrfmrMmgT08SJE03NlquuXbtKSEiIPPPMM6ZJafny5WaElrfo6CsdxaXbVUOmrvNjjz1mHtPtoB3CdQSWBk7t8KydnZ944gk5dOiQ15YBqAkIOwCK0QP/okWLTC2LnnhvyJAhZlSUHvyPHj1aofPwLF682PTH0fCir6cjulz7yDjOKKzDrG+99VYTfq688kpZuXLlJfmEnnrqKTNs3FX9+vUlJSVF/vnPf5q+MStWrDBDzb1Fmwu1NklD3tKlS81wfK3hUhqydBSWDmfX7aW1W9qUqH2mdPQVgPLz017KF1AeAACgWqFmBwAA2BphBwAA2BphBwAA2BphBwAA2BphBwAA2BphBwAA2BphBwAA2BphBwAA2BphBwAA2BphBwAA2BphBwAA2BphBwAAiJ39f+igB7iuGXEjAAAAAElFTkSuQmCC", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "plt.hist(random_numbers, bins=range(1, 101), edgecolor='black')\n", "plt.xlabel('Random Number')\n", "plt.ylabel('Frequency')\n", "plt.title('Histogram of Generated Random Numbers')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Most of the time it chose to output 1 or 42, which is not surprising since LLMs are not designed to generate truly random numbers. Their output reflects the patterns in the training data, and they may have learned that certain numbers are more common or more likely to be generated in response to certain prompts. To generate truly random numbers, it is better to use a dedicated random number generator tool or library, such as the random module in Python, which uses a pseudorandom number generator algorithm to produce random numbers that are uniformly distributed.\n", "\n", "However, we can give LLMs the ability to use external tools to perform tasks that they are not designed for. For example, we can create a tool that generates random numbers and then allow the LLM to call this tool as part of its response generation process. 
This way, we can leverage the capabilities of the LLM for natural language understanding and generation, while using the external tool for generating uniformly distributed random numbers.\n", "\n", ":::{.callout-note}\n", "We show the following code for illustration only, as the small model we use does not support tool calling.\n", ":::\n", "\n", "```python\n", "import random\n", "\n", "# The actual Python function that generates a random number\n", "def random_number_tool():\n", " return random.randint(1, 100)\n", "\n", "# The tool schema that tells the LLM what tools are available and how to call them\n", "tools = [\n", " {\n", " \"type\": \"function\",\n", " \"function\": {\n", " \"name\": \"random_number_tool\",\n", " \"description\": \"Generate a uniformly distributed random number between 1 and 100\",\n", " \"parameters\": {\n", " \"type\": \"object\",\n", " \"properties\": {},\n", " \"required\": []\n", " }\n", " }\n", " }\n", "]\n", "```\n", "\n", "Note that the tool schema only describes the tool to the LLM. It does not contain the actual Python function. It is the LLM that decides whether to call the tool based on the schema, and it is up to our code to execute the actual function and return the result to the LLM. We can now define a function that sends a request to the LLM API, checks whether the model wants to call a tool, executes the tool if needed, and sends the result back to the LLM to generate the final response.\n", "\n", "```python\n", "def generate_random_number_with_tool():\n", "\n", " messages = [\n", " {\"role\": \"system\", \"content\": \"You are a professional random number generator. Use the random_number_tool to generate a uniformly distributed random number between 1 and 100. 
Only reply with the number.\"},\n", " {\"role\": \"user\", \"content\": \"Generate a random number now.\"},\n", " ]\n", "\n", " # First API call: the model sees the available tools and decides whether to use one\n", " response = client.chat.completions.create(\n", " model=model,\n", " messages=messages,\n", " tools=tools\n", " )\n", "\n", " # Check if the model requested a tool call\n", " if response.choices[0].message.tool_calls:\n", "\n", " print(\"Model requested a tool call. Executing the tool...\")\n", "\n", " tool_call = response.choices[0].message.tool_calls[0]\n", "\n", " # Execute the actual Python function\n", " result = random_number_tool()\n", "\n", " # Append the assistant's tool request and the tool result to the conversation\n", " messages.append(response.choices[0].message)\n", " messages.append({\n", " \"role\": \"tool\",\n", " \"tool_call_id\": tool_call.id,\n", " \"content\": str(result)\n", " })\n", "\n", " # Second API call: the model generates a final response using the tool result\n", " response = client.chat.completions.create(\n", " model=model,\n", " messages=messages,\n", " tools=tools\n", " )\n", "\n", " return response.choices[0].message.content\n", "\n", "```\n", "\n", "With this setup, the LLM can produce uniformly distributed random numbers by delegating to the external tool, while still leveraging its natural language understanding and generation capabilities for the rest of the conversation. Unfortunately, Gemma 3 270m is not able to use tools, but larger and more powerful models can learn to use tools effectively, which significantly expands their capabilities and allows them to perform a wider range of tasks."
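, "\n", "\n",
"To keep something runnable even without a tool-capable model, we can sketch the dispatch step on its own: a small registry maps the tool names declared in the schema to the actual Python functions, and a helper looks up and executes the requested tool. The TOOL_REGISTRY and execute_tool_call names below are illustrative choices of our own, not part of the OpenAI or Ollama APIs:\n",
"\n",
"```python\n",
"import random\n",
"\n",
"def random_number_tool():\n",
"    return random.randint(1, 100)\n",
"\n",
"# Hypothetical registry mapping schema tool names to Python functions;\n",
"# our code, not the LLM, is responsible for this lookup and execution\n",
"TOOL_REGISTRY = {\"random_number_tool\": random_number_tool}\n",
"\n",
"def execute_tool_call(name, arguments=None):\n",
"    # In a real loop, name and arguments would come from\n",
"    # response.choices[0].message.tool_calls\n",
"    func = TOOL_REGISTRY[name]\n",
"    return func(**(arguments or {}))\n",
"\n",
"result = execute_tool_call(\"random_number_tool\")\n",
"print(result)  # an integer between 1 and 100\n",
"```\n",
"\n",
"With a registry like this, handling several tool calls in one response becomes a loop over tool_calls, executing each requested tool and appending one tool message per call."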
] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3", "path": "/usr/local/share/jupyter/kernels/python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.13.12" } }, "nbformat": 4, "nbformat_minor": 4 }