Objectives of the chapter
- Implementing PostgreSQL-based SDAC (Simple Data Access Layer)
- Switching Stickers Microservices to Use PostgreSQL SDAC
Why Choose PostgreSQL Database?
In fact, it is not strictly necessary to choose PostgreSQL; I chose it here for several reasons:
- PostgreSQL is free and easy to use, lightweight and efficient enough to meet current needs
- PostgreSQL is a mature ecosystem, rich in resources, and easy to troubleshoot.
- Keycloak and, later, Hangfire integrate with PostgreSQL more easily, and consolidating onto a single database instance reduces some deployment cost overhead, at least in our simple case
Document-oriented MongoDB is also a good choice, but because of the third consideration above (some of the required third-party dependencies do not have solid support for MongoDB), PostgreSQL was chosen as the database for our case.
Accessing a PostgreSQL database can be done with ADO.NET, specifically using the official Npgsql package, and then reading and writing the database with the standard ADO.NET programming model and paradigm. However, this is not particularly convenient in some scenarios. For example, to query a "sticker" entity by its Id, you need to build a DbCommand object first, pass the Id as a parameter, execute a DbDataReader to read the data row by row, build a Sticker object from the data that was read, and finally close the database connection. So, in this case, I will choose a lightweight object mapping framework, Dapper, to implement the PostgreSQL-based SDAC.
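To make the contrast concrete, here is a minimal sketch of the same by-Id query written first against raw Npgsql and then with Dapper. It assumes the stickers table created later in this chapter, the Sticker class introduced below, and a connectionString variable; it is an illustration, not the case code:

using Dapper;
using Npgsql;

await using (var conn = new NpgsqlConnection(connectionString))
{
    await conn.OpenAsync();
    // Raw ADO.NET: build the command, bind the parameter, walk the reader, map by hand.
    await using var cmd = new NpgsqlCommand(
        @"SELECT ""Id"", ""Title"", ""Content"" FROM public.""stickers"" WHERE ""Id"" = @id", conn);
    cmd.Parameters.AddWithValue("id", 1);
    await using var reader = await cmd.ExecuteReaderAsync();
    Sticker? sticker = null;
    if (await reader.ReadAsync())
        sticker = new Sticker(reader.GetString(1), reader.GetString(2)) { Id = reader.GetInt32(0) };
}

await using (var conn = new NpgsqlConnection(connectionString))
{
    // Dapper: one call handles command creation, parameter binding and object mapping.
    var sticker = await conn.QuerySingleOrDefaultAsync<Sticker>(
        @"SELECT ""Id"" Id, ""Title"" Title, ""Content"" Content FROM public.""stickers"" WHERE ""Id"" = @Id",
        new { Id = 1 });
}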
Preparing a PostgreSQL Database and Development Tools
You can choose between two ways to prepare a PostgreSQL database:
- Direct Installation of PostgreSQL Service on Local Machine
- Using Docker
From a development perspective alone, the first approach is simpler: download the PostgreSQL server installer, install it, and it's up and running. In this case, however, I chose to run PostgreSQL with Docker. First, it is simpler to deploy and distribute: a Dockerfile defines how to build the Docker image and run the container locally, and the Dockerfile can easily be kept under version control in a code repository. Second, future cloud deployments in the form of Docker containers will also be simpler and more convenient: using the Web App for Containers hosting service provided by Azure, or deploying to the Azure Kubernetes Service (AKS), containerized applications can be deployed directly to Azure with ease.
If you prefer to install the PostgreSQL service directly on your local machine, skip the Docker section that follows and go directly to the database and data table creation section.
Running a PostgreSQL database in Docker
You can run a PostgreSQL database directly with the docker run command, but we're going to tweak things a little here: we'll build our own custom PostgreSQL image based on the official PostgreSQL image, for a purpose I'll describe in the last part of this article.
First, create a folder named docker as a sibling of the src folder; inside the docker folder, create a folder named postgresql; and in the postgresql folder, create a new Dockerfile:
$ mkdir -p docker/postgresql
$ cd docker/postgresql
$ echo "FROM postgres:17.0-alpine" >> Dockerfile
This Dockerfile currently contains only one instruction, which builds a new image based on postgres:17.0-alpine. You can now build an image from this Dockerfile by executing the docker build command in the postgresql folder (the same directory as the Dockerfile):
$ docker build -t daxnet/stickers-pgsql:dev .
Note: Here I am using daxnet/stickers-pgsql as the image name because I will later publish this image to Docker Hub, so the image name includes my registry name on Docker Hub. Feel free to choose an image name that suits your own situation.
Note: When selecting a base image (the postgres image in this case), we usually pin a specific tag instead of using latest, in order to prevent version compatibility issues during continuous integration caused by unexpectedly picking up a newer version of the image.
We can also use Docker Compose to build the image. In the docker folder, create a new Docker Compose file (I'll call it docker-compose.dev.yml here) with the following content:
volumes:
  stickers_postgres_data:

services:
  stickers-pgsql:
    image: daxnet/stickers-pgsql:dev
    build:
      context: ./postgresql
      dockerfile: Dockerfile
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=stickersdb
    volumes:
      - stickers_postgres_data:/var/lib/postgresql/data:Z
    ports:
      - "5432:5432"
The benefit of using Docker Compose is that it lets you build multiple images in one batch, without having to remember which name and tag to use for each image every time you build. What's more, the same Docker Compose file also defines the configuration and startup parameters of the containers, and can bring up all the defined containers together.
Note that the Docker Compose filename (docker-compose.dev.yml) contains the word "dev". This is because I intend this Compose file to include only the containers for the infrastructure supporting the development environment, such as the database, Keycloak, logging storage, caching, Elasticsearch, and so on; the Docker containers of our case application itself will not be included in this file. The advantage is that a single Docker Compose command starts all the infrastructure services needed to run and debug our case application, and then we only need to debug the application itself in the IDE.
Once you have the above files ready, you can use the following two commands to compile and run the PostgreSQL database service:
$ docker compose -f docker-compose.dev.yml build
$ docker compose -f docker-compose.dev.yml up
After successfully starting the container, you should see an output log similar to the one below:
stickers-pgsql-1 | 2024-10-17 12:55:55.024 UTC [1] LOG: database system is ready to accept connections
Creating Databases and Data Tables
You can use a client tool such as pgAdmin to connect to the database. First create a database named stickersdb, then execute the following SQL statements on that database to create the data table:
CREATE TABLE public."stickers" (
"Id" integer NOT NULL,
"Title" character varying(128) NOT NULL,
"Content" text NOT NULL,
"DateCreated" timestamp with time zone NOT NULL,
"DateModified" timestamp with time zone
);
ALTER TABLE public."stickers" OWNER TO postgres;
ALTER TABLE public."stickers" ALTER COLUMN "Id" ADD GENERATED ALWAYS AS IDENTITY (
SEQUENCE NAME public."stickers_Id_seq"
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1
);
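If you'd rather verify the setup from code than from pgAdmin, a minimal Npgsql connectivity check could look like the following sketch (the connection string matches the Compose file above; this is an illustration, not part of the case code):

using Npgsql;

await using var conn = new NpgsqlConnection(
    "Host=localhost;Port=5432;Username=postgres;Password=postgres;Database=stickersdb");
await conn.OpenAsync();
// Count the rows in the (still empty) stickers table; prints 0 on a fresh database.
await using var cmd = new NpgsqlCommand(@"SELECT COUNT(*) FROM public.""stickers""", conn);
Console.WriteLine(await cmd.ExecuteScalarAsync());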
Once the database has been successfully created, it's time for the next step: designing and implementing the PostgreSQL data access layer.
PostgreSQL Data Access Layer
In the previous chapter we designed the basic structure of the data access layer; now we just need to extend it. The detailed UML class diagram is as follows:
The yellowish part of the diagram above shows the classes newly added this time and the dependencies on external libraries. We will create a new .NET assembly (class library) containing one main class: PostgreSqlDataAccessor. Mirroring the InMemoryDataAccessor class from our previous design, it also implements the ISimplifiedDataAccessor interface, except that its implementation accesses PostgreSQL via Npgsql. As mentioned above, the Dapper library is also referenced here to simplify the data access code.
The reason for placing PostgreSqlDataAccessor in a separate class library, as I described in the previous article, is that PostgreSqlDataAccessor has external dependencies that should not "pollute" the core library. This isolation brings another benefit: different Simplified Data Accessors (SDACs) can be packaged into different component libraries, which not only improves the testability of the system but also gives the application stability and flexibility, since you can choose different components to access the data store.
As you can also see from the diagram above, StickersController will from now on be injected with PostgreSqlDataAccessor instances; InMemoryDataAccessor exits the stage of history.
Next, I will describe some implementation details of PostgreSqlDataAccessor.
Object-Relational Mapping (ORM)
We did not introduce a full ORM framework here. Although Dapper officially describes itself as a simple object mapper, when using Dapper directly, the object mapping (from C# objects to database tables and fields) can be implemented in a variety of ways. Inside PostgreSqlDataAccessor we don't know which object the mapping should be based on: the Sticker class is a business concept defined alongside StickersController, and we cannot assume that the type of object operated on in PostgreSqlDataAccessor is the Sticker class, so there is no way to hard-code the mapping there. Therefore, we need to define a simple object-relational mapping mechanism of our own and use it in PostgreSqlDataAccessor to generate the mappings that Dapper can use.
Currently our design is still fairly simple: there is only one business object, Sticker, with no other objects directly related to it. So the problem reduces to this: map an object to a data table, and map the properties of that object to the fields of the table. Moreover, the application only uses existing tables and fields and never creates them at startup, so the mapping doesn't even need to define field types or constraints.
We can use C# attributes to specify which data table the decorated class maps to and which fields its properties map to. For example, two attributes can be defined:
[AttributeUsage(AttributeTargets.Class)]
public sealed class TableAttribute(string tableName) : Attribute
{
public string TableName { get; } = tableName;
}
[AttributeUsage(AttributeTargets.Property)]
public sealed class FieldAttribute(string fieldName) : Attribute
{
public string FieldName { get; } = fieldName;
}
Then these attributes are applied on the model type:
[Table("Stickers")]
public class Sticker(string title, string content): IEntity
{
public Sticker() : this(string.Empty, string.Empty) { }
public int Id { get; set; }
[Required]
[StringLength(50)]
public string Title { get; set; } = title;
public string Content { get; set; } = content;
[Field("DateCreated")]
public DateTime CreatedOn { get; set; } = DateTime.UtcNow;
[Field("DateModified")]
public DateTime? ModifiedOn { get; set; }
}
In the code above, the Sticker class is mapped to the Stickers data table through TableAttribute, while the CreatedOn and ModifiedOn properties are mapped to the DateCreated and DateModified fields through FieldAttribute. For properties not decorated with FieldAttribute, the property name is used as the field name by default. So, the following methods can get the table name and field names in the database from the entity type passed in:
private static string GetTableName<TEntity>() where TEntity : class, IEntity
{
return typeof(TEntity).IsDefined(typeof(TableAttribute), false)
? $"public.{typeof(TEntity).GetCustomAttribute<TableAttribute>()?.TableName ?? typeof(TEntity).Name}"
: $"public.{typeof(TEntity).Name}";
}
private static IEnumerable<KeyValuePair<string, string>> GetColumnNames<TEntity>(
params Predicate<PropertyInfo>[] excludes)
where TEntity : class, IEntity
{
    return from p in typeof(TEntity).GetProperties()
        where p.CanRead && p.CanWrite && !excludes.Any(pred => pred(p))
        select p.IsDefined(typeof(FieldAttribute), false)
            ? new KeyValuePair<string, string>(p.Name,
                p.GetCustomAttribute<FieldAttribute>()?.FieldName ?? p.Name)
            : new KeyValuePair<string, string>(p.Name, p.Name);
}
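As a quick illustration (hypothetical usage; both methods are private helpers inside the accessor), for the Sticker class shown earlier these helpers would produce:

var table = GetTableName<Sticker>();     // "public.Stickers"
var columns = GetColumnNames<Sticker>(); // Id -> Id, Title -> Title, Content -> Content,
                                         // CreatedOn -> DateCreated, ModifiedOn -> DateModified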
PostgreSqlDataAccessor can then use these two methods to build SQL statements. For example, in the AddAsync method:
public async Task<int> AddAsync<TEntity>(TEntity entity, CancellationToken cancellationToken = default)
where TEntity : class, IEntity
{
var tableName = GetTableName<TEntity>();
var tableFieldNames = (", ",
GetColumnNames<TEntity>(p => == nameof())
.Select(n => $"\"{}\""));
var fieldValueNames = (", ",
GetColumnNames<TEntity>(p => == nameof())
.Select(n => $"@{}"));
var sql = $@"INSERT INTO {tableName} ({tableFieldNames}) VALUES ({fieldValueNames}) RETURNING ""Id""";
await using var sqlConnection = new NpgsqlConnection(connectionString);
var id = await <int>(sql, entity);
= id;
return id;
}
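For the Sticker entity shown earlier, the statement generated by AddAsync would look roughly like this; the Id column is excluded, and the @-parameter names come from the C# property names so that Dapper can bind them from the entity instance:

INSERT INTO public.Stickers ("Title", "Content", "DateCreated", "DateModified")
VALUES (@Title, @Content, @CreatedOn, @ModifiedOn) RETURNING "Id"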
The excludes parameter of the GetColumnNames method is a predicate delegate through which you can specify properties that do not need a field name. For example, the AddAsync method above adds a row to the data table; in its INSERT statement the Id value is not inserted (instead, the database returns the generated Id), so when building this SQL statement there is no need to get the field name corresponding to the Id property.
Constructing SQL Field Names from Lambda Expressions
In ISimplifiedDataAccessor, the first parameter of the GetPaginatedEntitiesAsync method is a Lambda expression that specifies the object property to sort by:
Task<Paginated<TEntity>> GetPaginatedEntitiesAsync<TEntity, TField>(Expression<Func<TEntity, TField>> orderByExpression,
bool sortAscending = true, int pageSize = 25, int pageNumber = 0,
Expression<Func<TEntity, bool>>? filterExpression = null, CancellationToken cancellationToken = default)
where TEntity : class, IEntity;
However, when we query the database using Dapper and Npgsql, we have to construct SQL statements ourselves, so this is where Lambda expressions need to be converted into SQL; in particular, we need to obtain the field name used in the ORDER BY clause. The basic idea is to convert the Lambda expression into a MemberExpression and then read the Name of its Member property. There is one special case: the incoming Lambda expression may be wrapped in a Convert node (refer to the method implementation in the preceding section), in which case you first need to convert the Convert operand into a MemberExpression and then get the property name. The code is as follows:
private static string BuildSqlFieldName(Expression expression)
{
if (expression is not MemberExpression && expression is not UnaryExpression)
throw new NotSupportedException("Expression is not a member expression");
    var memberExpression = expression switch
    {
        MemberExpression expr => expr,
        UnaryExpression { NodeType: ExpressionType.Convert } unaryExpr =>
            (MemberExpression)unaryExpr.Operand,
        _ => null
    };
    if (memberExpression is null)
        throw new NotSupportedException("Can't infer the member expression from the given expression.");
    return memberExpression.Member.IsDefined(typeof(FieldAttribute), false)
        ? memberExpression.Member.GetCustomAttribute<FieldAttribute>()?.FieldName ?? memberExpression.Member.Name
        : memberExpression.Member.Name;
}
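A quick illustration (hypothetical usage; this requires System.Linq.Expressions): sorting by CreatedOn yields the mapped column name, even when the compiler wraps the member access in a Convert node because the value type is boxed to object:

Expression<Func<Sticker, object>> orderBy = s => s.CreatedOn; // Body is Convert(s.CreatedOn)
var field = BuildSqlFieldName(orderBy.Body);                  // returns "DateCreated"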
Then, converting the Lambda expression into an ORDER BY clause can be implemented like this:
var sortExpression = BuildSqlFieldName(orderByExpression.Body);
if (!string.IsNullOrEmpty(sortExpression))
{
    // sqlBuilder is the StringBuilder that assembles the SQL statement.
    sqlBuilder.Append($@"ORDER BY ""{sortExpression}"" ");
// ...
}
By this point, you may be wondering about what happens in the controller method: it reads the sort field name as a string from the client's RESTful API request, converts that string into a Lambda expression, and passes it to ISimplifiedDataAccessor (which here is actually PostgreSqlDataAccessor); but inside PostgreSqlDataAccessor the Lambda expression is converted back into a string for splicing the SQL statement. Isn't that redundant? Not really. From an overall design perspective, using Lambda expressions in ISimplifiedDataAccessor to define sorting and filtering fields is reasonable, because this design fits not only the SQL string-splicing implementation described in this article but also data access components built around Lambda expressions (such as Entity Framework). In other words, the round trip of converting the Lambda expression back into a field name string stems from the specificity of the PostgreSqlDataAccessor implementation, not from the generality of the design itself. Of course, Dapper has some unofficial extension libraries that allow Lambda expressions to be used directly for sorting and filtering, but I don't want to introduce too many external dependencies here; keeping things straightforward and simple also leads naturally into the design problem and solution idea described here.
Building WHERE clauses based on Lambda expressions
In StickersController, there is the following code, whose purpose is to determine, before creating a "sticker", whether a sticker with the same title already exists:
// dataAccessor is the ISimplifiedDataAccessor instance injected into the controller.
var exists = await dataAccessor.ExistsAsync<Sticker>(s => s.Title == title);
if (exists) return Conflict($"""Sticker "{title}" already exists.""");
In ISimplifiedDataAccessor, the ExistsAsync method expresses the data filtering condition through a Lambda expression parameter, so in PostgreSqlDataAccessor this Lambda expression needs to be converted into a WHERE clause so that the SQL statement can be executed through Dapper. For example, assuming the title in the code above is "This is a test sticker", the resulting WHERE clause would be:
WHERE "Title" = 'This is a test sticker'
Here is the code that converts a Lambda expression into a SQL WHERE clause:
private static string BuildSqlWhereClause(Expression expression)
{
    // Determine the comparison operator in SQL based on the node type of the expression.
    var oper = expression.NodeType switch
    {
        ExpressionType.Equal => "=",
        ExpressionType.NotEqual => "<>",
        ExpressionType.GreaterThan => ">",
        ExpressionType.GreaterThanOrEqual => ">=",
        ExpressionType.LessThan => "<",
        ExpressionType.LessThanOrEqual => "<=",
        _ => null
    };
    // Currently only the operators listed above are supported.
    if (string.IsNullOrEmpty(oper))
        throw new NotSupportedException("The filter expression is not supported.");
    if (expression is not BinaryExpression { Left: MemberExpression leftMemberExpression } binaryExpression)
        throw new NotSupportedException("The filter expression is not supported.");
    // Get the corresponding field name in the database.
    var fieldName = BuildSqlFieldName(leftMemberExpression);
    string? fieldValue = null;
    // Get the value to use in the WHERE clause: if the right part of the BinaryExpression
    // is a constant expression, the constant is used directly; otherwise reflection is
    // used to compute the value of the captured field.
    switch (binaryExpression.Right)
    {
        case ConstantExpression rightConstExpression:
            fieldValue = FormatValue(rightConstExpression.Value);
            break;
        case MemberExpression rightMemberExpression:
            var rightConst = rightMemberExpression.Expression as ConstantExpression;
            var member = rightMemberExpression.Member;
            fieldValue = FormatValue(rightConst?.Value?.GetType()
                .GetField(member.Name)?.GetValue(rightConst.Value));
            break;
    }
    // Return the assembled WHERE clause condition.
    if (!string.IsNullOrEmpty(fieldValue))
        return $"""
            "{fieldName}" {oper} {fieldValue}
            """;
    throw new NotSupportedException("The filter expression is not supported.");

    // Local function for formatting field values: if the value is a string, wrap it in
    // single quotes per PostgreSQL's SQL literal syntax.
    string? FormatValue(object? value)
    {
        return value switch
        {
            null => null,
            string => $"'{value}'",
            _ => value.ToString()
        };
    }
}
I've added comments to the code above, but it basically boils down to this:
- Ensure that the Lambda expression is a Binary Expression
- Get the comparison operator from this expression
- Get the field names from the left part of the Binary Expression
- Getting field values from the right part of a Binary Expression
- Stitching field names, operators, and field values into a conditional statement
- Currently, only a limited number of comparison operators are supported, and complex conditional expressions (AND, OR, etc.) are not supported.
If you are interested, you can extend the above method to support more complex WHERE clause construction logic; one possible direction is sketched below.
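For instance, support for logical AND/OR could be layered on top of the existing method by recursing into both sides of the expression (a minimal sketch, not part of the case code):

private static string BuildSqlWhereClauseWithLogic(Expression expression) =>
    expression switch
    {
        // Recurse into both operands and join them with the SQL logical operator.
        BinaryExpression { NodeType: ExpressionType.AndAlso } b =>
            $"({BuildSqlWhereClauseWithLogic(b.Left)} AND {BuildSqlWhereClauseWithLogic(b.Right)})",
        BinaryExpression { NodeType: ExpressionType.OrElse } b =>
            $"({BuildSqlWhereClauseWithLogic(b.Left)} OR {BuildSqlWhereClauseWithLogic(b.Right)})",
        // Fall back to the simple comparison handling shown above.
        _ => BuildSqlWhereClause(expression)
    };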
Implementation of Paged Queries under PostgreSQL
Pagination queries need to be implemented at the database level; I've already covered the reasons in the article "Dynamically Building Lambda Expressions to Sort Data by Specified Fields in ASP.NET Core Web APIs", so I won't ramble on about them here. A key point is how to obtain both the current page's dataset and the total number of records in a single database query. There are quite a few ways to implement this logic in PostgreSQL; I've chosen a simpler one, which doesn't necessarily have the best performance but is good enough for now.
You can use a SELECT statement like this one to achieve a return of both the paged data set and the total number of data items:
SELECT "Id" Id,
"Title" Title,
"Content" Content,
"DateCreated" CreatedOn,
"DateModified" ModifiedOn,
COUNT(*) OVER() AS "TotalCount"
FROM public."stickers"
ORDER BY "Id"
OFFSET 0 LIMIT 20
After executing this SQL statement, we can't use Dapper's QueryAsync&lt;TEntity&gt;(sql) call to convert the result set directly into a list of Sticker objects, because the fields returned by this SQL statement contain not only the Sticker properties but also a TotalCount field; converting the result set straight into a Sticker list would lose the TotalCount information. So what we use here is the QueryAsync(sql) overload (note: no generic parameter) to get a list of dynamic objects. Since each of these dynamic objects implements IDictionary&lt;string, object&gt;, when reading them we just need to create the Sticker object directly and then, through reflection, assign the value of each field to it:
private static TEntity? BuildEntityFromDictionary<TEntity>(IDictionary<string, object> dictionary)
where TEntity : class, IEntity
{
    var properties = from p in typeof(TEntity).GetProperties()
        where p.CanRead && p.CanWrite && dictionary.ContainsKey(p.Name)
        select p;
    var obj = (TEntity?)Activator.CreateInstance(typeof(TEntity));
    foreach (var property in properties) property.SetValue(obj, dictionary[property.Name]);
return obj;
}
Does the use of dynamic and reflection here hurt performance? It is of course not as fast as accessing objects directly, but the loss is negligible compared to the latency of external storage access and network transfer. In fact, if you use a full ORM framework instead of splicing SQL statements yourself, there are performance costs not only in certain techniques (mapping, table joins, and so on) but also in additional functionality (object state tracking, and so on), so there is no need to worry about it.
The detailed paging implementation is not listed here; please refer directly to this chapter's case code.
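That said, the core of the paged query looks roughly like the following sketch (local names such as sqlConnection and sql are assumed; see the chapter's source code for the full implementation):

// Query without a generic parameter: each dynamic row is an IDictionary<string, object>.
var rows = (await sqlConnection.QueryAsync(sql)).ToList();
// COUNT(*) OVER() returns the same total on every row, so read it from the first one.
var totalCount = rows.Count == 0
    ? 0L
    : Convert.ToInt64(((IDictionary<string, object>)rows[0])["TotalCount"]);
// Rebuild the strongly-typed entities from the dictionaries.
var items = rows
    .Select(r => BuildEntityFromDictionary<TEntity>((IDictionary<string, object>)r))
    .OfType<TEntity>()
    .ToList();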
Using PostgreSqlDataAccessor on StickersController
Having implemented PostgreSqlDataAccessor, using it in StickersController of the ASP.NET Core Web API application becomes very simple. First, provide the PostgreSQL connection string to the application as a configuration parameter. Since we are still in the debugging phase, we can put it in the appsettings.Development.json file: just add a db section to this file and specify the connection string:
{
"Logging": {
"LogLevel": {
"Default": "Information",
"": "Warning"
}
},
"db": {
"connectionString": "User ID=postgres;Password=postgres;Host=localhost;Port=5432;Database=stickersdb;Pooling=true;Connection Lifetime=0;"
}
}
Then, add a project reference to the new class library from the Web API project, and in the Program.cs file, register PostgreSqlDataAccessor with the dependency injection container:
var dbConnectionString = ["db:connectionString"];
if ((dbConnectionString))
throw new ApplicationException("The database connection string is missing.");
<ISimplifiedDataAccessor, PostgreSqlDataAccessor>(_ =>
new PostgreSqlDataAccessor(dbConnectionString));
At this point, StickersController can work against the PostgreSQL database through PostgreSqlDataAccessor. Because of our sensible design, the replacement happens seamlessly at the data access layer: not a single line of code in StickersController needs to change.
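To see why nothing changes, recall that StickersController only ever sees the interface. Here is a sketch of the constructor injection, assuming the controller shape from the earlier chapters:

// The controller depends only on ISimplifiedDataAccessor; whether an in-memory or a
// PostgreSQL implementation is injected is decided solely by the DI registration.
[ApiController]
[Route("[controller]")]
public class StickersController(ISimplifiedDataAccessor dataAccessor) : ControllerBase
{
    // ... actions use dataAccessor exactly as before, unaware of PostgreSQL.
}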
Getting the program up and running
First, start the PostgreSQL database (I start it with Docker):
$ cd docker
$ docker compose -f docker-compose.dev.yml up
Then, in Visual Studio 2022 or JetBrains Rider, press F5 to debug the application directly, and create a few new stickers (I'll create just one here to save space):
daxnet@daxnet-HP-ZBook:~/Projects/stickers/docker$ curl -X 'POST' \
'http://localhost:5141/stickers' \
-H 'accept: */*' \
-H 'Content-Type: application/json-patch+json' \
-d '{
"title": "Test Stickers",
"content": "这是一张Test Stickers。"
}' -v && echo
Note: Unnecessary use of -X or --request, POST is already inferred.
* Host localhost:5141 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:5141...
* Connected to localhost (::1) port 5141
> POST /stickers HTTP/1.1
> Host: localhost:5141
> User-Agent: curl/8.5.0
> accept: */*
> Content-Type: application/json-patch+json
> Content-Length: 73
>
< HTTP/1.1 201 Created
< Content-Type: application/json; charset=utf-8
< Date: Tue, 22 Oct 2024 13:26:19 GMT
< Server: Kestrel
< Location: http://localhost:5141/stickers/7
< Transfer-Encoding: chunked
<
* Connection #0 to host localhost left intact
{"id":7,"title":"Test Stickers","content":"这是一张Test Stickers。","createdOn":"2024-10-22T13:26:19.834994Z","modifiedOn":null}
Then, call the Get Stickers API with 2 records per page, sorted in descending order by sticker creation time:
daxnet@daxnet-HP-ZBook:~/Projects/stickers/docker$ curl "http://localhost:5141/stickers?sort=CreatedOn&asc=false&page=0&size=2" | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 310 0 310 0 0 91122 0 --:--:-- --:--:-- --:--:-- 100k
{
"items": [
{
"id": 7,
"title": "Test Stickers",
"content": "这是一张Test Stickers。",
"createdOn": "2024-10-22T13:26:19.834994Z",
"modifiedOn": null
},
{
"id": 6,
"title": "this is a text",
"content": "test",
"createdOn": "2024-10-22T12:20:44.83564Z",
"modifiedOn": null
}
],
"pageIndex": 0,
"pageSize": 2,
"totalCount": 5,
"totalPages": 3
}
Off-topic: database and data table initialization
One question remains: if I am running the PostgreSQL database container for the first time, or the volume the container depends on has been deleted, the database and data tables will be gone when the container runs again; how do we create them anew? Actually, the official PostgreSQL Docker image supports database initialization scripts: when the database initializes, PostgreSQL reads the SQL files in the /docker-entrypoint-initdb.d directory in alphabetical order and executes them one by one. So the problem becomes very simple: just put the initialization SQL scripts for the database and data tables in this directory. One caveat: the statements in the SQL files must be idempotent.
Add an extra line to the Dockerfile in the docker/postgresql directory:
FROM postgres:17.0-alpine
COPY *.sql /docker-entrypoint-initdb.d/
Then, create a new SQL file named 1_create_stickers_db.sql with the following contents:
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET transaction_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
DROP TABLE IF EXISTS public."stickers";
SET default_tablespace = '';
SET default_table_access_method = heap;
CREATE TABLE public."stickers" (
"Id" integer NOT NULL,
"Title" character varying(128) NOT NULL,
"Content" text NOT NULL,
"DateCreated" timestamp with time zone NOT NULL,
"DateModified" timestamp with time zone
);
ALTER TABLE public."stickers" OWNER TO postgres;
ALTER TABLE public."stickers" ALTER COLUMN "Id" ADD GENERATED ALWAYS AS IDENTITY (
SEQUENCE NAME public."stickers_Id_seq"
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1
);
Rebuild the Docker image and restart the PostgreSQL container; if the volume used by the container does not yet exist, the database and data tables will be created automatically.
Summary
This article is rather long. It walked through a few key points of implementing PostgreSqlDataAccessor and the problems encountered along the way, briefly described switching StickersController over to PostgreSqlDataAccessor, and covered the initialization of the database. The next chapter will start integrating the authentication and authorization parts, and I highly recommend that interested readers read the following articles first:
- "Implementing Multi-Tenancy in Keycloak and Authentication under ASP.NET Core"
- "Implementing Authorization in Keycloak"
- "Keycloak-Based Multi-Tenant User Authorization in ASP.NET Core Web API"
Source code
The relevant source code for this chapter is here:
/daxnet/stickers/tree/chapter_3/
After downloading the source code, go to the docker directory, then build and start the containers:
$ docker compose -f docker-compose.dev.yml build
$ docker compose -f docker-compose.dev.yml up
You can now open the solution file directly with Visual Studio 2022 or JetBrains Rider and launch it for debugging runs.