Generative AI and Large Language Models are taking applied machine learning by storm, and there is no sign of this trend weakening. While it is important to remember that this stream is unlikely to wipe out other branches of machine learning, and that there are still many things to be careful about when applying LLMs (check my previous blog post for some examples), the use of these models is unavoidable in some areas. But to leverage them really efficiently, some pieces must be put together in the proper way. In particular, the use of powerful language understanding capabilities has to be backed up by a clean, well-organized implementation and a pleasant user experience. With a very simple example, we will demonstrate how to achieve these three essential goals using commercial LLM APIs, Kedro and Streamlit respectively.
Imagine that you have to read a very technical document without being an expert in the field. The document surely contains a lot of domain-specific wording, hard-to-understand terms and possibly many outside references. Reading such a document can be quite a pain: you spend more time looking for explanations on the Internet than on the document itself. Nowadays, with the power of Large Language Models at your fingertips, you can make this process a lot faster and easier. LLMs are pretrained on vast amounts of text from different domains and encode all this broad knowledge in their parameters, while also allowing for seamless human-machine interaction in plain, natural language. In some cases, when pretrained knowledge is not enough, a model can be adapted to a domain or to instructions, or fine-tuned in other ways to make it even more useful. This, however, is quite a complex and tricky topic, and we will not focus on it in this article, although it is under heavy research in our GetInData Advanced Analytics Labs.
So what exactly is the idea behind the LLM Reading Assistant? It is as simple as this: while reading a document, you paste any unclear term or passage into the assistant and ask an LLM of your choice to either explain or summarize it.
The usefulness of this kind of tool shows most in large organizations, where people in different roles (management, business, technical, legal etc.) have to deal with domain-specific documents, and the efficiency of processing them is key. Think, for example, of a manager reviewing a technical specification, or an engineer going through a legal contract.
The solution presented in this article is just a very simple PoC that presents the idea of a Reading Assistant, and also shows how easily you can build a quite functional application by combining the Kedro and Streamlit frameworks, backed by commercial Large Language Models. Reforging it into a full-scale, production-grade tool would still require some important further development.
Nevertheless, such a demo is always a good start, so let’s dive in to see how it works.
The code of the application described here is publicly available as one of the QuickStart ML Blueprints: a set of ML solution examples built with a modern open-source stack and according to data science development best practices. You can find the project and its documentation here. Feel free to run and experiment with it, and to explore the other blueprints, which include classification/regression models, recommendation systems and time series forecasting.
Kedro users will surely notice that, from this framework's perspective, the presented solution is very much trimmed down compared to standard Kedro use cases. It consists of only one pipeline (run_assistant) that contains just a single node (complete_request). Since all input to the pipeline is passed via parameters (some of them in the standard way via Kedro conf files, others via the Streamlit app, as explained later) and the only output is the LLM's response to be printed for the user, the project doesn't use a data catalog. In this simple PoC there was also no need for MLflow logging; only the local logger was used for debugging purposes. One Kedro feature that is still very helpful is the pipeline configuration mechanism. It turns out that even in such a special use case, seemingly not very aligned with the usual Kedro way of working, it allows for flexible and efficient integration with the additional user interface layer formed by the Streamlit app, as the sketch below illustrates.
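For reference, here is a minimal sketch of how such a single-node pipeline can be registered in Kedro. The pipeline, node, parameter and output names follow the article; the body of complete_request is a hypothetical stand-in for the actual LLM call:

from kedro.pipeline import Pipeline, node

def complete_request(api: str, model: str, mode: str, input_text: str) -> str:
    # The real node dispatches the request to the selected LLM API;
    # this placeholder only illustrates the node's interface
    return f"[{api}/{model}] {mode} response for: {input_text[:50]}"

def create_pipeline(**kwargs) -> Pipeline:
    return Pipeline(
        [
            node(
                func=complete_request,
                inputs=["params:api", "params:model", "params:mode", "params:input_text"],
                outputs="answer",
                name="complete_request",
            )
        ]
    )

Since the answer output is not declared in any data catalog, Kedro keeps it in memory and returns it from session.run(), which is exactly what the Streamlit layer described below relies on.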
On top of the Kedro run_assistant pipeline there is another Python script, run_app, which, not surprisingly, defines and runs the Streamlit application. In more detail, it renders the input widgets, writes the selected parameter values to Kedro's local configuration, and triggers the pipeline run on demand.
The interesting thing in this setup is the coupling between Streamlit and the Kedro pipeline. Kedro keeps its parameters in the conf directory. By default, there are two subfolders there: base and local (you can also define other sets of parameters and use them as different environments). The first one holds the default parameters for a baseline Kedro run and is stored in Git. The other one is kept only locally; you can use it for parameters specific to your own environment, which should not be shared, and it is also a good place for anything temporary that you do not wish to overwrite in your base configuration files. This makes parameters.yml in the local subdirectory a perfect bridge between the parameters entered in the Streamlit interface and the Kedro pipeline.
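For illustration, after the user makes their selections in the app, the locally written file could look like this (the concrete values, including the model name, are hypothetical):

api: OpenAI
model: gpt-3.5-turbo
mode: explain
input_text: What is a transformer encoder?

The Streamlit script that writes this file and triggers the pipeline boils down to the following: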
import os
import streamlit as st
import yaml
from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project

# Make the Kedro project discoverable and open a session
bootstrap_project(os.getcwd())
session = KedroSession.create()

# Input widgets for all pipeline parameters
api = st.selectbox("Select an API:", ["OpenAI", "VertexAI PaLM", "Azure OpenAI"])
# In the full project the available models depend on the selected API;
# this mapping is an illustrative placeholder
model_choices = {"OpenAI": ["gpt-3.5-turbo"], "VertexAI PaLM": ["text-bison"], "Azure OpenAI": ["gpt-35-turbo"]}[api]
model = st.selectbox("Choose LLM:", model_choices)
mode = st.selectbox("Choose mode:", ["explain", "summarize"])
input_text = st.text_area("Paste term or text:", value="", height=200)

# Persist the current selections where the Kedro pipeline reads its parameters
with open("./conf/local/parameters.yml", "w") as f:
    yaml.dump(
        {"api": api, "model": model, "mode": mode, "input_text": input_text}, f
    )

if st.button("Get Answer!"):
    # Run Kedro pipeline to use LLM
    answer = session.run("run_assistant_pipeline")["answer"]
else:
    answer = "Paste input text and click [Get Answer!] button"

st.write(answer)  # display the response (or the hint) in the app
Each time the button is clicked, the Kedro pipeline is rerun, possibly with new parameter values if they were updated in the meantime. Since Streamlit re-executes the whole script on every interaction, the local parameters.yml is rewritten each time as well.
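To try it yourself, you launch the whole application with a single Streamlit command (assuming the script described above is saved as run_app.py; the run_app name comes from the project, while the exact file path is an assumption):

streamlit run run_app.py

Streamlit then serves the app locally in the browser and handles the rerun cycle described above.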
And that's it! This demonstrates a very simple yet effective way of managing, parameterizing and running Kedro pipelines via a Streamlit application. Of course, the example is basic, but you can imagine more complex setups with multiple Kedro pipelines that use more Kedro features. In those scenarios, the Kedro project structure and its well-organized pipelining framework would be even more advantageous, while still leveraging the ease of building Streamlit applications. Nevertheless, the coupling between the two would remain as simple as above.
If you are interested in other applications of LLMs and potential issues during implementation, check out our other blog posts and keep up with the new ones that are published, especially the one about the Shopping Assistant: an e-commerce conversational tool that provides search and recommendation capabilities using a natural language interface.
Do you have any questions? Feel free to sign up for a free consultation!