Part II: How We Built An Internal Tools Builder That Lives In Your Python Codebase

Dropbase is a local-first internal tools builder for Python developers. It integrates with any Python codebase and supports custom packages. Call server-side functions, reuse ORM models, import PyTorch or Pandas. In this post, we talk in detail about how we built this framework.

This post is published in 2 parts:

  1. Part I: Why we built it?
  2. Part II: How we built it?

In Part II below, we describe how we built Dropbase. Follow the link if you're looking to read Part I: Why we built it?

Design choices

We first need to set some context on the general design philosophy and design choices for this tool-building framework. The main focus is on improving developer efficiency without compromising security. Here are the main design principles in this paradigm. 

Avoiding repetitive code

One way to improve dev efficiency is to reduce the amount of code needed. There are a few ways to do this:

  • Maximize use of existing codebase. The framework should leverage an existing codebase so devs don’t have to duplicate code anywhere; UI components should be able to trigger scripts/functions directly
  • Pre-built UI library. The framework should have a set of commonly used UI components so devs don’t have to find or build them from scratch
  • Simplify repetitive tasks. Since building web tools requires writing a lot of code to handle client-server payloads associated with UI rendering, standardizing client-server communications would greatly simplify repetitive tasks

Keeping dev workflows unchanged

This means localhost development, version control, and CI/CD. Storing all files locally would let devs manage and version their codebase with their preferred versioning system (e.g. GitHub, GitLab, or custom). Developers can also set up new or reuse existing CI/CD pipelines.


Keeping data local

Data should be processed on the user's machines. Since internal tools act on sensitive data, every aspect of data processing needs to take place in the user's infrastructure, from storing database credentials and function code to executing those functions and fetching data from APIs.


Fitting the Python mental model

The framework should be convenient to use and fit the mental model of Python devs. For example, it should support code autocomplete and ship ancillary features, such as user permissions, built in.

Reducing number of languages

If you are a Python dev, chances are you just want to use Python for everything. We wanted devs to be able to build apps with just Python. So even though under the hood we use JavaScript to make this work, devs don’t need to actually write any of it.

In summary, the design philosophy for this framework is to improve developer efficiency by having a small learning curve, avoiding the need to rewrite code or introducing new dev workflows, providing a set of common UI components, and shipping ancillary but critical features out of the box. 

Practically, this means using a declarative UI-building approach, allowing UI components to trigger server-side functions directly, while keeping it all pythonic and dev friendly.

How app-building works in Dropbase

First, what are “apps”? Throughout this post, we define an app as a web tool with a graphical user interface through which end users can trigger actions. Apps consist of app pages; pages consist of UI components such as tables, text inputs, and dropdowns. Users interact with UI components, which can trigger function calls.

App building is done via App Studio. Studio is a web IDE that lets developers declare and reposition UI components, write and debug Dropbase Functions, and bind UI components and events to those functions.

In Studio, devs can declare UI components by selecting them from a list of components and arranging them in a WYSIWYG way. Devs can then bind these UI components to new or existing Dropbase Functions, which they can write within Studio itself. Dropbase Functions are Python scripts with pre-defined function signatures for convenience. They are otherwise just like any Python script with standard Python code, into which devs can import other scripts or libraries.

Declaring UIs, writing functions, and binding the two form the core mental model required to build apps in the Dropbase framework. Once an app is built in Studio, devs can “preview” how the app looks and works for the end user. When development is complete, apps can be shared with other developers or end users via granular permissions.

Switching over to an end user who receives access to an app: they can enter data via input forms, click a button on the page, or select a different row in a UI table, and through these actions execute the Python script bound to the associated UI component.

Next, we’ll describe how we built this app-building experience.

Overview of framework structure

App files

In Dropbase, apps are a collection of files assembled within a directory. Apps consist of properties (declared UIs and metadata) and user scripts (Python or SQL scripts), all of which are stored as files — you can literally share an app by emailing the app directory as a zip file.

For example, here’s how an app called “demo” is stored in your “workspace”.

File system, includes an app named “demo”

Apps include scripts, validation models, and page property definitions. More generally, they are organized as follows:


    workspace/
    ├── APP_NAME/
    │   ├── PAGE_NAME/
    │   │   ├── scripts/
    │   │   │   └── user_sql.sql
    │   │   └── properties.json
    │   └── properties.json
    └── properties.json

  • workspace/: the main directory that contains all user apps
  • APP_NAME/: a directory that contains all files related to one app
  • PAGE_NAME/: a directory that contains files related to a page in the app; an app contains one or more pages
  • scripts/: a directory that contains all user scripts (Python or SQL files)
  • two generated files that contain the State and Context model definitions
  • properties.json: a file that contains an app page’s properties, such as which UI components it contains

App services

App services work in tandem with app files to enable building and using apps. We built multiple app services to enable UI rendering, app testing, and script running. App services include a client, a server, a task worker, and a language server protocol (LSP).

These services are shipped to developers as docker images (managed via docker-compose) and they all work together to enable writing and running functions, syncing app properties, and code autocompletion. Additionally, we ship a helper package that contains various utilities, including an easy way to interact with relational database connectors.

Here’s roughly what each service does:


Client

The client is in charge of placing and rendering app pages and their corresponding UI components. It syncs declared UI components and their properties with the server. The client also provides a code editor with auto-complete features to enable easy function writing.


Server

The server processes requests from the client and spins up the task worker to run specific Dropbase Functions. The server also updates page properties and regenerates the state and context models, which will be discussed later.

Task worker

The task worker’s only job is to run Dropbase Functions (user scripts) specified by the server, with the corresponding user inputs passed in from the client. It’s built as an async task runner that executes user code.


LSP

The LSP makes it more convenient to write user scripts in Studio and update them in the filesystem. It gives developers auto-complete for standard Python, imported packages and libraries, and other user scripts. It enables code edits to be saved in the filesystem. It also facilitates access to the state and context models (again, more on this later).

How app services interact with app files

Dropbase app services interact with app files in specific ways to make app building, testing, and function running possible: through the client, devs write functions and declare UIs; events such as button clicks call the server to run functions; the task worker executes each of those functions; and the LSP makes writing functions convenient and makes it easy to save code edits in the corresponding files.

Here’s a high-level overview of the flow:

  • When a developer adds a new UI component to a page from Studio (client), the properties of this component are pulled from a corresponding template
  • Using these properties, the client renders a component properties form
  • When the developer edits the form (button labels, number of options in a dropdown), an updated version of page properties is sent to the server
  • The server validates the incoming page properties, saves them into the page’s properties.json file, and updates the validation models, which can then be used by user scripts stored in the scripts directory within workspace/
  • User scripts can be updated by editing Python scripts in Studio, with the help of the LSP server. Conveniently, they can also be edited directly in VS Code or any other text editor
  • When a developer or end user clicks a button, a request is sent to the server, which in turn spins up a task worker to execute the task, which usually involves running a script with some user input


An important file worth highlighting is properties.json. It stores page properties and is the key reference point that keeps the client and server in sync about an app’s UI components and their properties. The client uses it to render all the page's components and define their behavior. The server uses it to create models that are then used to verify client requests and inputs to user scripts.

Another way to understand properties.json is to think of it as a file-based database that stores a page’s properties and the metadata of its UI components. The properties.json file is regenerated whenever a developer updates a page in Studio by adding, editing, or deleting components.
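As an illustration, a minimal properties.json for a page with one widget might look roughly like this (the exact schema is Dropbase-internal; the field names here are assumptions):

```json
{
  "widget1": {
    "label": "Users",
    "components": [
      { "name": "input1", "type": "text", "label": "Name" },
      { "name": "button1", "type": "button", "label": "Submit" }
    ]
  }
}
```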

How the client communicates with the server

So we have a collection of UI components such as tables, inputs, dropdowns, checkboxes, and modals that developers can declare via Studio. When a component is added or modified, the client updates page properties and sends them to the server, which in turn saves them in properties.json. Using the same properties, the server updates its own validation models. And through these UI components, developers (and end users) can directly trigger server-side functions, which are handled by the server and executed by the task worker. Up to this point, we’ve implicitly assumed client and server just “talk” to each other. But how do we pass payloads from UI components to backend Python functions and vice versa?

The standard way would be to send a `POST` request with a regular payload when an event is triggered by users as they interact with a UI component in an app. This payload could be processed by a Python function and return results that are then rendered by the client. While this is a straightforward method, it would require developers to format payloads for each type of UI component, every time data is sent to functions and back. This would be tedious, error-prone, and require writing more code.

Given our design goals, a more convenient approach would be to standardize inputs and outputs between UI components and user scripts. To do this, we introduced the concepts of Dropbase State and Context. With this approach, Python functions take in user inputs in the form of a standard state object, perform some operations with them, and return a standard context object that the client then uses to re-render UI components. This way, every function could always use the same signature to interact with UI components: functions take a state object and return a context object. With just State and Context, devs could access and modify anything related to UI components via Python functions.

Let’s explore the state and context objects in detail.

State and Context

Here’s a sample user script that fetches data from the web and returns a context object which contains data to be rendered in a table.
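A minimal sketch of such a function, assuming hypothetical component names and using a stand-in for the actual web request:

```python
from types import SimpleNamespace

# Illustrative sketch of a Dropbase Function. The signature (state in,
# context out) follows the pattern described in this post; the component
# and field names (table1, data) are assumptions, and fetch_users()
# stands in for a real HTTP call.
def fetch_users():
    # a real script might call something like requests.get(url).json()
    return [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]

def get_users(state, context):
    users = fetch_users()
    context.table1.data = users  # new data for the client to render
    return context

# stand-in context object for demonstration
context = SimpleNamespace(table1=SimpleNamespace(data=None))
context = get_users(SimpleNamespace(), context)
```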


State

A state object holds component values that result from user interaction with an app. Examples include the text a user enters into an input component or the column values of a selected row in a specific table.

It is important to note that state does not hold the values of all components on a page, just those updated as a result of user interaction. This is intentional: based on our observations, user scripts mostly need the data an end user interacts with. Sending values for everything on a page with each payload hinders app performance and has little practical value. In the rare cases when a user script needs to read an entire table’s dataset, that data can be re-fetched server-side during function execution.


Context

A context object holds the values that a user script passes back to the client. Examples include new data for a table, updates to a component’s properties, or a confirmation message for the end user. All page components are represented in the context object. User scripts can assign and update context values during execution.

For example, a user script can update the data displayed in a table by assigning the result of `df.to_dtable()` to that table’s field in the context. It can likewise hide an input field on a widget, or notify the end user with an update message.
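These context updates can be sketched with a stand-in context object. In Dropbase, context is a generated Pydantic model; the attribute names below (`data`, `visible`, `message`) are assumptions, not the framework’s actual fields:

```python
from types import SimpleNamespace

# Stand-in context object for illustration only; attribute names
# are hypothetical.
context = SimpleNamespace(
    table1=SimpleNamespace(data=None),
    widget1=SimpleNamespace(
        components=SimpleNamespace(input1=SimpleNamespace(visible=True)),
        message=None,
    ),
)

context.table1.data = [{"id": 1, "name": "Ada"}]   # update table data
context.widget1.components.input1.visible = False  # hide an input field
context.widget1.message = "Record updated"         # notify the end user
```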

State and Context in Action

Each component on the client is subscribed to both state and context objects (the client itself stores those objects in React state, not to be confused with Dropbase State). When a component's values are updated by an end user, the component transmits updates to the state object. Then, when the end user triggers an action bound to a user script, the client receives a new context object back from the server after execution. The updated context is then used by the client to re-render the components that were updated as a result of running the user script.

Together, State and Context standardize client-server communication, letting developers access, modify, and pass data between UI components and server-side user scripts with just Python.

Making State and Context more convenient to use

When the client sends requests to the server, the state and context objects are sent as JSON and interpreted as Python dictionaries by the server. Dicts work fine, but they are inconvenient during development: Python dicts do not support validation or auto-complete, and they make it awkward to access nested values.

For example, to get a name field from table 1, a developer would have to write `state["table1"]["name"]` (component names here are illustrative).

To improve the development experience, we turned the state and context dicts into Pydantic models. This addresses the dict shortcomings and enables access to values as `state.table1.name`.
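A hedged sketch of the difference, with illustrative component names:

```python
from typing import Optional
from pydantic import BaseModel

# The same payload as a plain dict vs. a Pydantic model
# (component names are illustrative, not Dropbase's actual schema).
payload = {"table1": {"name": "Ada"}}

class Table1State(BaseModel):
    name: Optional[str] = None

class State(BaseModel):
    table1: Table1State

state = State(**payload)

# dict access vs. validated attribute access
assert payload["table1"]["name"] == state.table1.name
```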

Generating (and updating) State and Context models

State and context objects are different for each page and change every time a developer declares or modifies a UI component in their app. Because of this, we need to update state and context objects, or more explicitly, we need to update their corresponding Pydantic models every time a UI component is added, modified, or deleted. Let’s discuss how we update these models.

State Model

To generate or update a state model, the server iterates over each component and its children defined in page properties. For each child, using its `type` field, the server infers its respective Python type. The state model is then composed using the parent and child names as attribute names and the inferred Python types as field types.

For example, an input that accepts integers maps to the Python type `int`. The state for a widget with such an input would look like:
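A hedged sketch of such a generated model (the widget and input names are illustrative; Dropbase’s actual generated code may differ):

```python
from typing import Optional
from pydantic import BaseModel

# Hypothetical generated state model for a widget "widget1"
# with one integer input "quantity".
class Widget1State(BaseModel):
    quantity: Optional[int] = None

class State(BaseModel):
    widget1: Widget1State

# values arriving as JSON strings are validated/coerced by Pydantic
state = State(widget1={"quantity": "5"})
```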

NOTE: button and text components do not have state values since they do not hold user input values.

Context Model

To generate or update a context model, we use a collection of predefined templates for each UI component. The server iterates over each component in page properties and maps it to its respective template from the collection.

Like with state, we use parent and child as attribute names. However, in state, parents have no properties other than their children, so the state structure is flat and a developer can target each user input value as `state.parent.child`. In context, both parents and children can have their own properties, and we need to target each property individually, so we group children into sub-categories for convenience. For example, we group table columns under a `columns` field and widget components under a `components` field. When assigning a value to a child in the context, the developer targets it as `context.parent.columns.child.property` (for table columns) or `context.parent.components.child.property` (for widget components).

Once both state and context models are generated or updated, they are written to their corresponding model files using the `datamodel_code_generator` library. With the generated models, user scripts can now use state and context and benefit from the convenience of Pythonic models.

Bringing it all together

To bring it all together, let’s go over a simple flow of adding an input and a button to a widget, binding button click to a Python function, and running it.

When a developer opens up Studio, the client renders the page using page properties from `properties.json`. 

Open Studio

When the developer adds a new component, the client requests its properties from the server. The server provides the component’s properties to the client by converting its Pydantic model to JSON.

Add component

Here is an example of what such a model looks like for an input component:
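A hedged sketch of such a property model, with assumed field names, along with the JSON form the client would receive:

```python
import json
from typing import Optional
from pydantic import BaseModel

# Hypothetical property model for an input component; Dropbase's
# actual model has different and/or additional fields.
class InputProperties(BaseModel):
    name: str
    label: str
    data_type: str = "text"
    placeholder: Optional[str] = None

props = InputProperties(name="input1", label="Name")
print(props.json())  # the JSON payload sent to the client
```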

The server serializes this model to JSON before sending it to the client.

Using these properties, the client renders a component properties form.


After the developer modifies component properties (e.g. renaming a component’s label) and saves the changes, the client sends the updated page properties to the server.

When the server receives the page properties, it validates them and saves them to the properties.json file. It then uses these properties to generate new state and context models, which are saved to their corresponding files.

Update component

When the end user first loads a page, the client uses page properties from `properties.json` to render page components. The end user then interacts with the page by adding values to input fields, selecting rows, or clicking buttons. If a button that is associated with (or bound to) a user script is clicked, a POST request is sent to the server to execute that user script. Along with metadata for the user script, the request includes the state in the payload.

Sample request payload
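Such a payload might look roughly like this (the field names are assumptions, not Dropbase’s actual request schema):

```json
{
  "app_name": "demo",
  "page_name": "page1",
  "script_name": "get_users",
  "state": {
    "widget1": { "input1": "Ada" }
  }
}
```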

Upon receiving the request, the server first validates the incoming payload and starts a new job in the task worker. The task worker collects the required modules (state, context, user script code) and executes the user script. The result of the task execution is written to a Redis server.

As the task worker executes the request, the client checks the status of the task. Upon task completion, the client reads the results from Redis and updates UI components according to the context updates. Here’s a sample result:

Result response
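The execute-and-poll flow can be sketched as follows, with an in-memory dict standing in for Redis; this is an illustration, not Dropbase’s actual implementation, and the job and field names are hypothetical:

```python
import asyncio

# In-memory stand-in for the Redis results store.
results = {}

async def task_worker(job_id, state):
    # run the "user script" and write its result to the store
    await asyncio.sleep(0.01)  # stand-in for real work
    results[job_id] = {
        "status": "success",
        "context": {"widget1": {"message": "done"}},
    }

async def client_poll(job_id):
    # the client checks the task status until a result appears
    while job_id not in results:
        await asyncio.sleep(0.005)
    return results[job_id]

async def main():
    asyncio.create_task(task_worker("job-1", {"input1": "Ada"}))
    return await client_poll("job-1")

result = asyncio.run(main())
```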

Here’s a diagram that summarizes the flow discussed:

Running a script


There you have it. A local-first internal tools builder for Python developers with a focus on developer efficiency. By creating a framework for building internal tools that works in an existing codebase, developers can directly access server-side functions, use custom packages, and reuse pre-existing ORM models.

The framework provides a set of UI components so devs don’t need to write frontend code. With a server-side paradigm where UI components can directly trigger Python scripts, the framework aligns with the existing mental models of Python devs. And with a standardized protocol for accessing, modifying, and passing data between client and server, repetitive coding tasks and the learning curve for end-to-end app building are significantly reduced, resulting in less code sprawl and greater developer efficiency.

Because Dropbase apps are just a collection of files, it’s easy to version control the code and integrate with CI/CD the same way developers already do, avoiding the need for new developer workflows. This design also makes it easy to self-host and allows users to process data securely within their own infrastructure.

The overall result is a practical solution for developers looking to efficiently build secure and functional internal tools integrated with their existing Python codebases.
