5 posts tagged with "telegram"

· 5 min read

Make a personal Japanese tutor in a few minutes

This is an article by Chase Zhang (Twitter: @ant_sz).

Over the past year, Large Language Models (LLMs) have seen burgeoning growth and development. As a data systems enthusiast, I find it essential to explore and study this hot field to stay up to date. This article summarizes my recent experience developing LLM applications in Rust using flows.network.

Concepts around Large Language Models

When talking about LLMs, ChatGPT and OpenAI immediately come to mind. Despite OpenAI's recent change in CEO, their significant role in advancing and promoting LLMs is undeniable. From the perspective of individual developers and small companies, training and deploying LLMs is impractical. As a data system developer, I prefer to represent the concept of LLMs as follows:

Concept of LLM

From an application developer's perspective, LLMs can be viewed as functions composed of massive training data. Since this function is static, it requires Prompts and Context as inputs during use. The function's role is to generate outputs most likely to meet user needs based on the Prompt and Context. This output, together with the user’s input (Chat), is fed back into the LLM as new Context.

This mindset allows LLM developers to focus less on the internal details of LLMs and more on preparing inputs for this “function.” Of course, there are nuances in prompt-tuning, but today's LLMs are remarkably “smart” and often produce effective results with simple prompts.
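To make this “function” view concrete, here is a minimal sketch in Rust. The types and the llm signature are illustrative stand-ins, not any particular SDK's API:

    // Illustrative types only; not tied to any particular LLM SDK.
    struct Message {
        role: String,   // "system", "user", or "assistant"
        content: String,
    }

    // The LLM viewed as a static "function": prompt + accumulated context in,
    // completion out. A hypothetical signature; a real client would call the
    // model provider over HTTP here.
    fn llm(_prompt: &str, _context: &[Message]) -> String {
        unimplemented!()
    }

    // Each chat turn appends the user's input, calls the "function", and feeds
    // the output back in as new Context for the next turn.
    fn chat_turn(prompt: &str, context: &mut Vec<Message>, user_input: &str) -> String {
        context.push(Message { role: "user".into(), content: user_input.into() });
        let reply = llm(prompt, context);
        context.push(Message { role: "assistant".into(), content: reply.clone() });
        reply
    }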

RAG and Vector Store

RAG (Retrieval Augmented Generation) and Vector Store are essential in LLM applications. RAG, a term that emerged with the rise of LLMs, addresses the challenge of LLMs in quickly utilizing new knowledge. For example, ChatGPT has long been limited to answering questions based on pre-2021 knowledge. While Fine-tuning can solve this, it's a costly and slow process.

Initially, RAG utilized Vector Stores for information extraction, primarily in Q&A systems. Traditional keyword-based query systems often fail to return relevant results for natural language queries. Thus, Vector Stores work with an embedding model to compute Embeddings (vector data) for documents, and then use similarity calculations to extract relevant information as Context for LLMs. The traditional Vector Store-based RAG process is depicted below:

Usage of Vector Store

In this framework, the Embedding Model converts both user queries and existing documents into vectors within the same metric space, so that their similarity can be measured meaningfully. When calling the LLM, the extracted information serves as Context, allowing the LLM to answer various questions, including those about newly added documents. However, this framework’s capability can sometimes be limited by the performance of the Embedding Model.
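As a rough sketch of the retrieval step (the embed function and the in-memory document list are hypothetical stand-ins, not a specific vector database's API): documents and the query are embedded into the same space, and the nearest documents by cosine similarity become the Context.

    // Hypothetical embedding function; in practice this calls an embedding model.
    fn embed(_text: &str) -> Vec<f32> {
        unimplemented!()
    }

    fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
        let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
        let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
        let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
        dot / (norm_a * norm_b)
    }

    // Pick the k documents most similar to the query; their text becomes the
    // Context passed to the LLM.
    fn retrieve<'a>(query: &str, docs: &'a [(String, Vec<f32>)], k: usize) -> Vec<&'a str> {
        let q = embed(query);
        let mut scored: Vec<(&str, f32)> = docs
            .iter()
            .map(|(text, vector)| (text.as_str(), cosine_similarity(&q, vector)))
            .collect();
        scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
        scored.into_iter().take(k).map(|(text, _)| text).collect()
    }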

RAG and Assistant

A more powerful RAG framework, based on the so-called Assistant model, has recently gained popularity. The Assistant acts as a third agent-like role in the interaction between the LLM and the user, similar to a chatbot. It can accept commands output by the LLM, execute queries, and return results as Context. This model is illustrated below:

Assistant Framework

This model is based on a simple yet effective idea: LLMs are powerful enough to translate user questions into formatted statements, such as SQL or JSON-described function calls. If LLMs first output commands they believe can retrieve data, and an Agent executes these queries and returns the results to the current session’s Context, the LLM can access the latest information. This flexible approach allows querying traditional databases without pre-processing with an Embedding model. Moreover, LLMs can conduct multiple rounds of queries before returning results, making this a highly powerful framework.

Additionally, the Assistant framework can control external systems, not just data queries, significantly expanding the potential applications of LLMs.
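Here is a minimal sketch of the agent side of that loop, assuming the LLM has been prompted to emit a JSON-described function call. The tool name, JSON shape, and run_sql helper are made up for illustration:

    use serde_json::Value;

    // Assumption for illustration: the LLM has been prompted to answer with JSON
    // such as {"tool": "sql_query", "arguments": {"query": "SELECT ..."}}.
    fn dispatch_tool_call(llm_output: &str) -> Result<String, String> {
        let call: Value = serde_json::from_str(llm_output).map_err(|e| e.to_string())?;
        let tool = call["tool"].as_str().ok_or("missing tool name")?;
        match tool {
            "sql_query" => {
                let query = call["arguments"]["query"].as_str().ok_or("missing query")?;
                run_sql(query)
            }
            other => Err(format!("unknown tool: {other}")),
        }
    }

    // Hypothetical helper: runs the query and returns the rows as text, which is
    // then appended to the session's Context for the next LLM call.
    fn run_sql(_query: &str) -> Result<String, String> {
        unimplemented!()
    }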

Development of LLM Applications

As ordinary developers, we might feel powerless when it comes to major framework-level developments in LLMs. However, we can still build useful tools with the resources available to us. Currently, the most popular LLM framework is perhaps LangChain. Essentially, these tools are workflow management tools that let users weave their application logic into executable workflows.

Here, I chose flows.network, a Rust-based serverless LLM workflow platform, to develop a Telegram GPT application. It offers a variety of callable methods and tools to help you outline your LLM application flow. Your application is compiled into WebAssembly and runs on the hosted platform provided by flows.network, which is currently free.

Flows.Network Supported Integrations

The above image shows the integrations provided by flows.network. These interfaces are exposed as APIs with callable Rust SDKs, so users need only focus on organizing application logic and calling these methods. One reason for choosing this over LangChain is its lightweight design, which avoids the heavy abstractions that LangChain is often criticized for.

I used flows.network to create a Telegram Bot for Q&A and learning Japanese, which you can find at https://github.com/shanzi/Telegram-ChatGPT. The bot supports simple commands and maintains conversation context through Telegram's Reply-To feature.

Q&A Mode

Japanese Learning Mode

Notably, to handle Telegram’s peculiar Markdown syntax, I wrote a parser using Nom to process and escape Markdown syntax in ChatGPT’s output. See the src/markdown.rs file for details.
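The real parser in src/markdown.rs is built with Nom and preserves intended formatting. As a much simpler illustration of the underlying problem, Telegram's MarkdownV2 dialect reserves a fixed set of characters that must be escaped before sending; a naive version that escapes everything looks roughly like this:

    // Telegram's MarkdownV2 dialect reserves these characters; when they appear
    // as literal text they must be escaped with a backslash. This naive helper
    // escapes every occurrence, whereas the Nom-based parser in src/markdown.rs
    // keeps intended formatting intact.
    fn escape_markdown_v2(text: &str) -> String {
        const SPECIAL: &[char] = &[
            '_', '*', '[', ']', '(', ')', '~', '`', '>', '#', '+', '-', '=', '|', '{', '}', '.', '!',
        ];
        let mut out = String::with_capacity(text.len());
        for c in text.chars() {
            if SPECIAL.contains(&c) {
                out.push('\\');
            }
            out.push(c);
        }
        out
    }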

Currently, flows.network requires building to WebAssembly, which makes local execution and debugging harder. This can be a drawback, but with proper configuration, every push to your GitHub repository triggers an automatic build and deployment, and that convenient workflow largely compensates for it.

Conclusion

This article introduced general concepts of LLM applications and shared my experience using flows.network to develop an LLM application. I personally enjoy writing LLM applications in Rust, and flows.network offers the APIs I need, freeing me from dealing with various interfaces myself. The convenience of deploying FaaS with Wasm is also commendable. I hope to see more platforms like this emerge to facilitate the development of diverse GPT applications.

Feel free to Fork and try out my Telegram Bot: https://github.com/shanzi/Telegram-ChatGPT


· 3 min read

Introduction

Telegram is a popular messaging platform that allows users to communicate with individuals or groups through chats. In some cases, you may need to find the Chat ID or Group Chat ID for your Telegram conversations. In this guide, we will walk you through the process of finding the Telegram Chat ID and Group Chat ID.

For a Telegram bot token, please check out How to get a Telegram bot token.

The chat ID and group chat ID are not easy to get. The official Telegram bot @botfather doesn’t provide any information about chat IDs or group chat IDs. However, the Telegram bot ecosystem is thriving, and we can solve this problem with a bot called Get My ID. After trying several bots, I found this one the easiest to use.

Next, let’s learn some basic concepts. Telegram has three different kinds of chat IDs: one lets a Telegram bot send direct messages and is simply called the chat ID; the second lets a bot send and reply to messages in a group and is called the group chat ID; the last lets a bot send messages in a channel and is called the channel chat ID. Telegram channels are more complicated, and we will cover them in a separate article.

What’s the difference between a Telegram group and a channel? A Telegram group allows its members to send and reply to each other’s messages, while a Telegram channel is more like a notification system: only designated users can post messages in the channel, and subscribers can only read the messages or react to them with emojis.

Despite their distinctions, all three types of ID fall under the umbrella term chat_id in Telegram's API documentation. However, they play distinct roles and have different formats. Therefore, it is crucial to be clear about which type of Telegram bot you intend to develop.

How to Find Your Telegram Chat ID

To obtain your chat ID, simply send a direct message to the Get My ID bot, and it will promptly respond with your user ID and Current chat ID. For a direct message, both values are identical, so it doesn’t matter which one you copy. This ID is what a bot needs to interact with you via direct messages.

How to Find the Telegram Group ID

To get a group's chat ID, invite the Get My ID bot to the group. Once it has joined, send a message in the group chat, and the bot will respond with your user ID and the Current chat ID. The Current chat ID, which starts with a hyphen (-), is the group chat ID. Note that if the Get My ID bot is present in multiple groups, the Current chat ID it reports will differ from group to group, so make sure you message it in the group you intend to use.
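Once you have an ID, it goes into the chat_id parameter of Bot API calls. As a rough example (sketched with the http_req crate; the token, chat ID, and text are placeholders), sending a message to that chat looks like this:

    use http_req::request;

    // Placeholders: substitute your bot token and the chat ID you obtained above
    // (a group chat ID starts with a hyphen). In real code, URL-encode `text`.
    fn send_message(token: &str, chat_id: &str, text: &str) -> Result<(), String> {
        let url = format!(
            "https://api.telegram.org/bot{token}/sendMessage?chat_id={chat_id}&text={text}"
        );
        let mut writer = Vec::new();
        request::get(&url, &mut writer).map_err(|e| e.to_string())?;
        Ok(())
    }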

In summary, message the Get My ID bot for your personal chat ID, and invite it to a group for the group chat ID. We will cover how to get a channel chat ID in the next article. Please stay tuned!

Next, you can build a Telegram bot to summarize and send the Hacker News posts that you're interested in.

· 4 min read

Recently, flows.network added support for Anthropic's Claude, including the newly released Claude 2 (which supports 100k tokens). We have previously talked about how to build a Telegram ChatGPT bot. In this guide, I will walk you through the process of building your own Telegram chatbot powered by Claude in just three minutes using flows.network. You won't need to write any code or manage any servers. All you have to do is click a few buttons and provide some information.

With this Claude Telegram chatbot, you can:

  • Customize the prompts sent to Claude based on your needs
  • Chat privately and in a public group
  • Optionally set the maximum token length for your Claude model in the code

Before we start, let’s learn some basic concepts of Claude and the Telegram bot.

What is Claude

Claude is an AI assistant created by Anthropic to be helpful, harmless, and honest. It was trained using a technique called Constitutional AI, which guides language models like Claude with a set of written principles and uses feedback during training to reinforce helpful behavior and ensure safety and transparency.

By building your own Claude bot on Telegram, you contribute to the development of AI that prioritizes trustworthy and reliable interactions. Your bot's users will experience Claude as a helpful and honest AI partner, designed and trained to uphold those values above all else.

If you're new to Telegram bots, you can refer to our previous article, which introduces Telegram bots.

Create a Claude Telegram Bot from a Flow Template in Three Steps

The first step is to load the pre-built template for creating a Claude Telegram bot in your browser. Before you click the Create and Build button, you can review the three optional variables shown in the following image. Here I want to highlight the system_prompt variable, which is used to prompt Claude. You can type any prompt here.

The template contains the source code for the bot itself. We will clone the source code to your own GitHub account so that you can modify and customize it later.

Once you have made your customizations, click the Create and Build button, and you will be directed to another page to configure the Claude integration. Next, you need to add your Claude API key. Click on Connect and enter your key.

If you don’t have one, apply here.

Then, you need to add the Telegram bot token. You can follow this article to get a token from @botfather. Once you're done, the grey Deploy button will turn purple. Click the purple Deploy button to complete the flow.

That's it! Once the function is ready and the flow's status is "running," you can test your own Claude Telegram bot. You can explore different roles for your bot, such as polishing English writing, serving as a writing tutor, or explaining code, by using different prompts.

If you'd like to check out the source code, you can find it in the GitHub repository that was cloned to your account when you created the flow.

Access External Web Service

The bot's flow function can access the web, enabling your Telegram bot to use up-to-date information and web services in conjunction with Claude. In simpler terms, you can incorporate plugin-like functionality into your own Claude bot.

For example, you can make HTTPS requests to an external web service to look up the current weather and parse the result from the response JSON data.

// Query OpenWeatherMap for the current weather in `city` and deserialize the
// JSON response into an ApiResult value (a serde `Deserialize` struct defined
// elsewhere in the bot's source). The API key is read from the API_KEY
// environment variable, and `request::get` writes the response body into `writer`.
fn get_weather(city: &str) -> Result<ApiResult, String> {
    let mut writer = Vec::new();
    let api_key = std::env::var("API_KEY").unwrap();
    let query_str = format!(
        "https://api.openweathermap.org/data/2.5/weather?q={city}&units=metric&appid={api_key}"
    );

    request::get(query_str, &mut writer)
        .map_err(|e| e.to_string())
        .and_then(|_| {
            serde_json::from_slice::<ApiResult>(&writer).map_err(|_| {
                "Please check if you've typed the name of your city correctly".to_string()
            })
        })
}

By combining Claude with the web, your Telegram bot can act much like a ChatGPT plugin and use a conversational UI for complex applications. Calling external web services adds dynamic, real-world functionality. For example, you can call the SendGrid API to send emails based on the model's outputs.
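As a rough sketch of what such a call could involve, here is the JSON payload shape expected by SendGrid's v3 mail/send endpoint. The addresses and subject are placeholders, and the HTTP POST itself is only described in the comment:

    use serde_json::json;

    // Build the JSON body expected by SendGrid's v3 /mail/send endpoint. The
    // addresses and subject are placeholders; `llm_output` is the text produced
    // by the model. The payload is then POSTed to
    // https://api.sendgrid.com/v3/mail/send with an
    // "Authorization: Bearer <SENDGRID_API_KEY>" header.
    fn build_sendgrid_payload(llm_output: &str) -> serde_json::Value {
        json!({
            "personalizations": [{ "to": [{ "email": "recipient@example.com" }] }],
            "from": { "email": "bot@example.com" },
            "subject": "Message from your Telegram bot",
            "content": [{ "type": "text/plain", "value": llm_output }]
        })
    }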

· 7 min read

ChatGPT has taken the world by storm. However, its desktop web browser UI is cumbersome and delivers a subpar conversation experience. Users already have messaging apps they love and use every day. We should bring ChatGPT to messaging apps and enable users to converse with ChatGPT anytime, anywhere.

Telegram is a popular messaging app that allows users to securely send messages, files, and media to individuals or groups. In this article, I will walk through how to build a ChatGPT bot on Telegram. I will also discuss how to customize your bot with your own prompts and how to access external services from the bot (i.e., similar to ChatGPT plugins in the desktop web UI).

The bot is deployed on flows.network, a serverless platform for automating SaaS workflows with AI workloads.

What is a Telegram bot

Before we start, let’s first understand how a Telegram bot works. If you are familiar with Telegram bots, you can skip this part and go to the next section to create a ChatGPT Telegram bot.

Essentially, a Telegram bot is a software application that runs inside the Telegram app. It allows us to interact with it using text messages or commands. Telegram bots are built using APIs, which enable developers to create custom code to handle different types of messages and perform various actions.

You can use a Telegram bot to automate repetitive tasks, such as sending a welcome message to new members. You can also integrate Telegram with other SaaS products. That is exactly what we are doing right now: integrating Telegram with ChatGPT.
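As a rough illustration of what happens under the hood (flows.network and bot frameworks hide these details for you), a bot can long-poll the Bot API's getUpdates endpoint and answer each incoming message via sendMessage. This sketch uses the http_req and serde_json crates and simply echoes the text back:

    use http_req::request;
    use serde_json::Value;

    // A bare-bones polling loop; `token` is the bot token issued by @botfather.
    // Note: in real code, URL-encode the text before putting it in a query string.
    fn run_echo_bot(token: &str) -> Result<(), String> {
        let mut offset: i64 = 0;
        loop {
            // Long-poll Telegram for new updates.
            let url = format!(
                "https://api.telegram.org/bot{token}/getUpdates?timeout=30&offset={offset}"
            );
            let mut body = Vec::new();
            request::get(&url, &mut body).map_err(|e| e.to_string())?;
            let resp: Value = serde_json::from_slice(&body).map_err(|e| e.to_string())?;

            let updates = resp["result"].as_array().cloned().unwrap_or_default();
            for update in &updates {
                if let Some(id) = update["update_id"].as_i64() {
                    offset = id + 1; // acknowledge this update
                }
                if let (Some(chat_id), Some(text)) = (
                    update["message"]["chat"]["id"].as_i64(),
                    update["message"]["text"].as_str(),
                ) {
                    // Echo the text back; a real bot would call an LLM here instead.
                    let reply = format!(
                        "https://api.telegram.org/bot{token}/sendMessage?chat_id={chat_id}&text={text}"
                    );
                    let mut out = Vec::new();
                    request::get(&reply, &mut out).map_err(|e| e.to_string())?;
                }
            }
        }
    }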

Create a general ChatGPT Telegram bot from a flow template in 3 minutes

With ChatGPT's integration on Telegram, you can easily communicate with ChatGPT without the need to open an additional browser. This feature provides a convenient way to seek assistance from ChatGPT using Telegram's interface.

Make sure you have signed up for an account for flows.network.

  1. Load the ChatGPT-based Telegram bot template and click on the Create and Build button. The template contains the source code for the bot itself. flows.network will clone the source code to your own GitHub account so that you can modify and customize it later. We will use this code to combine awesome prompts with the Telegram bot.

  2. Give the bot your OpenAI API key. Click the Connect button to add your OpenAI API key. If you have saved API keys in the past, you can skip this step and reuse these keys.

  3. Configure the Telegram bot token. This connects the function with your bot. You can get a Telegram bot token from @botfather and paste it into the box here. Once you're done, the grey Deploy button will turn purple.

That’s it. Once the function is ready and the flow's status is running, you can give your own ChatGPT Telegram bot a try. You can ask it to polish your English writing, act as a writing tutor, or explain code, by giving it different prompts.

Advanced: Import awesome ChatGPT prompts to your telegram bot

ChatGPT prompt marketplaces are another hot topic that has emerged alongside ChatGPT. Prompting is critical for getting more accurate and efficient output from the language model. This is particularly important in fields such as question answering, language translation, and chatbot development, where output quality is directly related to the quality of the prompts used. A well-written prompt is an essential tool for improving the accuracy, efficiency, and usability of artificial intelligence systems in a wide range of applications. We have seen many ChatGPT prompt collections and marketplaces, like the open-source awesome-chatgpt-prompts GitHub repo, AwesomeChatGPT, FlowGPT, and so on.

In this section, I will show you how to add your favourite prompts to the ChatGPT Telegram bot you just built, so that the customized bot is tailored to a specific task. Here I use a prompt from AwesomeChatGPT.

I want you to act as an AI writing tutor. I will provide you with a student who needs help improving their writing and your task is to use artificial intelligence tools, such as natural language processing, to give the student feedback on how they can improve their composition. You should also use your rhetorical knowledge and experience about effective writing techniques in order to suggest ways that the student can better express their thoughts and ideas in written form.

To achieve this goal, we need to change the source code of the repo we created for you in the above step. You can find the GitHub repo address from the flow details page.

get the source code for the ChatGPT Telegram bot

Then go to the source file under src in your repo and replace the existing ChatGPT prompt “You are a helpful assistant answering questions on Telegram” on line 22 with any ChatGPT prompt you need.

Before:

let system = "You are a helpful assistant answering questions on Telegram.\n\nIf someone greets you without asking a question, you can simply respond \"Hello, I am your assistant on Telegram, built by the Second State team. I am ready for your question now!\"";

After:

let system = "I want you to act as an AI writing tutor. I will provide you with non-native english speakers who needs help improving their writing and your task is to use artificial intelligence tools, such as natural language processing, to give the student feedback on how they can improve their composition. You should also use your rhetorical knowledge and experience about effective writing techniques in order to suggest ways that the student can better express their thoughts and ideas in written form. \n\nIf someone greets you without asking a question, you can simply respond \"Hello, I am your assistant on Telegram, built by the Second State team. I am ready for improving your English writing now!\"";

After making the necessary changes, push them to the GitHub repository and flows.network will automatically rebuild your function. Once the build is complete, you will have a customized ChatGPT Telegram bot for your specific use case, eliminating the need to re-enter the prompt every time you start a new conversation.

Access external web services

The flow function behind the bot has access to the web. That allows your Telegram bot to use the latest information and web services in conjunction with ChatGPT. In other words, you can build ChatGPT plugin-like functionalities into your own bot. This example shows how to make HTTPS requests to an external web service to look up the current weather and parse the result from the response JSON data.

// Fetch the OpenWeatherMap JSON for `city` and deserialize it into ApiResult
// (a serde `Deserialize` struct defined elsewhere in the bot's source). The
// API key comes from the API_KEY environment variable.
fn get_weather(city: &str) -> Result<ApiResult, String> {
    let mut writer = Vec::new();
    let api_key = std::env::var("API_KEY").unwrap();
    let query_str = format!(
        "https://api.openweathermap.org/data/2.5/weather?q={city}&units=metric&appid={api_key}"
    );

    request::get(query_str, &mut writer)
        .map_err(|e| e.to_string())
        .and_then(|_| {
            serde_json::from_slice::<ApiResult>(&writer).map_err(|_| {
                "Please check if you've typed the name of your city correctly".to_string()
            })
        })
}

By combining ChatGPT with the web, your Telegram bot could become the conversational UI for complex applications. You can also call external web services to perform real-world actions. For example, you could call the Twilio API to make a phone call based on ChatGPT outputs.

What’s next?

With flows.network, you can create a ChatGPT-powered Telegram bot in just three minutes. You can personalize your bot by making changes directly to the bot’s source code. For example, you can customize it with your favorite prompts or use external web services to provide additional context or to perform real world actions.

What are you waiting for? Give it a try today! Feel free to join our Discord server to stay updated or to provide feedback.

· 2 min read

To build Telegram bots or automate Telegram-related workflows on the flows.network platform, a Telegram bot token is necessary. flows.network supports Telegram integration, which enables you to create Telegram bots with ease.

Here's a step-by-step guide on how to get a bot token for Telegram. You can then use this token to create your own Telegram bot.

  1. Open Telegram on your device or web browser and search for the "BotFather" account @BotFather.
  2. Start a chat with the BotFather and type "/newbot" to create a new bot. BotFather will ask for a name and username for your bot.
  3. Choose a name and username for your bot. The name can be anything you like but the username must end with "bot". For example, the name can be "FlowsNetworkBot" and the username can be "@flowsnetworkbot".
  4. If your chosen username is available, BotFather will provide you with a unique API token for your bot. You should copy and store this token carefully.
  5. Once you have the API token, you can start building your Telegram bot on flows.network. Here is a template project to build a ChatGPT-powered Telegram bot on flows.network.

Note that the API token is a unique identifier for your bot, which allows you to send requests to the Telegram Bot API. Make sure to keep this token safe and secure, as anyone with access to this token can potentially control your bot.
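For example, a quick way to check that a token works is to call the Bot API's getMe method, which returns your bot's own profile. Here is a rough sketch using the http_req crate, with the token passed in as a parameter:

    use http_req::request;

    // Call getMe with the token from BotFather; the response is JSON describing
    // your bot, e.g. {"ok":true,"result":{"username":"flowsnetworkbot", ...}}.
    fn check_token(token: &str) -> Result<String, String> {
        let url = format!("https://api.telegram.org/bot{token}/getMe");
        let mut body = Vec::new();
        request::get(&url, &mut body).map_err(|e| e.to_string())?;
        String::from_utf8(body).map_err(|e| e.to_string())
    }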

In conclusion, getting a bot token for Telegram is a simple process that can be completed with BotFather's help. Once you have the token, you can start building your bot on flows.network.

The next step is to read the telegram-gpt repo and follow the instructions to build your own ChatGPT bot.