Canvas Resume Editor - Graphics Drawing and State Management (Lightweight DOM)
Previously we covered the design of the data structures and clipboard operations, both of which were primarily data-oriented. Now let's move on to basic graphics drawing and graphics state management.
- Online editor: CanvasEditor
- Open source: WindrunnerMax/CanvasEditor
Related articles about Canvas and the resume editor project:
- The community kept recommending Canvas to me, so I learned Canvas and built a resume editor
- Canvas Graphics Editor - Data Structures and History (undo/redo)
- Canvas Graphics Editor - What data is in my clipboard?
- Canvas Resume Editor - Graphics Drawing and State Management (Lightweight DOM)
- Canvas Resume Editor - Monorepo+Rspack Engineering Practice
- Canvas Resume Editor - Hierarchical Rendering and Event Management Capability Designs
Graphics Design
Every project should start from its requirements. Since we are building a resume editor, we don't need many graphic types: rectangles, images, and rich text are enough. We can therefore simplify the abstraction and treat every element as a rectangle.
Because drawing rectangles is relatively straightforward, we can abstract this part directly in the data structure: the base class of a graphic element is determined by the `x`, `y`, `width`, and `height` attributes. Since there is also a layer order, we add a `z`, and to identify each graphic we also give it an `id`.
class Delta {
public readonly id: string;
protected x: number;
protected y: number;
protected z: number;
protected width: number;
protected height: number;
}
Our graphics will of course carry many attributes; a rectangle, for example, has a background plus a border size and color, and rich text needs attributes to describe the content to draw. So we also need an object to store these attributes. Moreover, since we use a plugin architecture, the concrete drawing of each graphic should be implemented by the plugin itself, so this part is left to subclasses.
abstract class Delta {
// ...
public attrs: DeltaAttributes;
public abstract drawing: (ctx: CanvasRenderingContext2D) => void;
}
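As a sketch of what such a plugin subclass could look like, here is a minimal hypothetical `RectDelta` that draws itself from its `attrs`. The attribute keys and the simplified base class below are illustrative, not the project's actual implementation:

```typescript
type DeltaAttributes = Record<string, string>;

// Simplified stand-in for the base class described above
abstract class Delta {
  constructor(
    public readonly id: string,
    protected x: number,
    protected y: number,
    protected z: number,
    protected width: number,
    protected height: number,
    public attrs: DeltaAttributes = {}
  ) {}
  public abstract drawing: (ctx: CanvasRenderingContext2D) => void;
}

// Hypothetical rectangle plugin: reads style from `attrs` and draws itself
class RectDelta extends Delta {
  public drawing = (ctx: CanvasRenderingContext2D) => {
    ctx.fillStyle = this.attrs["background"] || "transparent";
    ctx.fillRect(this.x, this.y, this.width, this.height);
    ctx.strokeStyle = this.attrs["borderColor"] || "#000";
    ctx.lineWidth = Number(this.attrs["borderWidth"] || 1);
    ctx.strokeRect(this.x, this.y, this.width, this.height);
  };
}
```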
When drawing, we use a two-layer approach: the inner `Canvas` draws the actual graphics, where incremental updates are expected to be implemented; the outer `Canvas` draws intermediate state, such as the selection of a graphic, multi-selection, and repositioning or resizing of graphics, and is refreshed in full. A ruler may also be drawn on this layer later.
One very important point to note here: our `Canvas` is not a vector graphic. If we set the editor's `width x height` directly as element attributes on a `1080P` monitor, there is no problem, but on a `2K` or `4K` monitor the graphics become blurry. We therefore need `devicePixelRatio`, the ratio of physical pixels to device-independent pixels, which we can read from `window` and use to control the `size` attributes of the `Canvas` element.
canvas.width = width * ratio;
canvas.height = height * ratio;
canvas.style.width = width + "px";
canvas.style.height = height + "px";
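Combined, a small helper can size the canvas for high-DPI screens and scale the context so that drawing code keeps working in CSS pixels. The `applyDPR` name below is mine, not from the project; it is a sketch of the standard pattern:

```typescript
// Hypothetical helper: size a canvas for the current devicePixelRatio
// so drawing code can keep working in CSS-pixel coordinates.
export const applyDPR = (
  canvas: HTMLCanvasElement,
  width: number,
  height: number,
  ratio: number = typeof window !== "undefined" ? window.devicePixelRatio : 1
): number => {
  // Backing store in physical pixels (must be an integer to avoid blur)
  canvas.width = Math.round(width * ratio);
  canvas.height = Math.round(height * ratio);
  // Layout size in CSS pixels
  canvas.style.width = width + "px";
  canvas.style.height = height + "px";
  // Scale the context once; subsequent draws use CSS-pixel units
  const ctx = canvas.getContext("2d");
  ctx && ctx.scale(ratio, ratio);
  return ratio;
};
```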
At this point we also need to handle the `resize` problem. We can use `resize-observer-polyfill` to implement this part of the functionality, but note that the `width` and `height` must be integers, otherwise the graphics in the editor will be blurry.
// The receivers elided in the original snippet are reconstructed
// here with illustrative member names
private onResizeBasic = (entries: ResizeObserverEntry[]) => {
  // COMPAT: `onResize` will trigger the first `render`
  const [entry] = entries;
  if (!entry) return void 0;
  // Defer to the macro task queue
  setTimeout(() => {
    const { width, height } = entry.contentRect;
    this.width = width;
    this.height = height;
    this.reset();
    this.editor.event.trigger(EDITOR_EVENT.RESIZE, { width, height });
  }, 0);
};
In fact, a full-featured graphics editor will not contain only rectangular nodes. To draw an irregular shape such as a cloud, for example, we would place the coordinates of the relevant control points in `attrs` and compute the `Bezier` curves to do the actual drawing. There is another problem to notice here: when the user clicks, how do we decide whether the point is inside or outside a graphic? If it is inside, the node should be selected by the click; if it is outside, it should not. Since our graphics are closed shapes, we can use ray casting: cast a ray from the point in any direction and count how many times it crosses the shape's edges. An odd count means the point is inside the graphic; an even count means it is outside.
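The even-odd ray-casting test described above can be sketched as follows. `pointInPolygon` is a hypothetical helper name, and it assumes the closed shape is given as a list of polygon vertices:

```typescript
type Point = [number, number];

// Ray-casting (even-odd) point-in-polygon test: cast a horizontal ray
// from `p` to the right and toggle `inside` on every edge it crosses.
export const pointInPolygon = (p: Point, polygon: Point[]): boolean => {
  const [px, py] = p;
  let inside = false;
  // Walk each edge (j -> i), wrapping around from the last vertex
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i];
    const [xj, yj] = polygon[j];
    const crosses =
      (yi > py) !== (yj > py) && // edge spans the ray's y level
      px < ((xj - xi) * (py - yi)) / (yj - yi) + xi; // crossing is to the right
    if (crosses) inside = !inside;
  }
  return inside;
};
```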
Drawing the graphics alone is not enough; we also need to implement the interaction around them. While implementing the interaction I ran into a tricky problem: since there is no `DOM`, every operation must be computed from positional information. For example, to resize a selected graphic, we must be in the selected state, detect a click on one of the resize points (plus a certain offset tolerance), and then resize the graphic according to `MouseMove` events. There are in fact many such interactions, including multi-selection, drag-and-drop box selection, and `Hover` effects, all built from just three events: `MouseDown`, `MouseMove`, and `MouseUp`. Managing the state and drawing the interaction `UI` is therefore quite painful; at this point, the best I could come up with was carrying different state in different `Payload`s and drawing the interactions from that.
export enum CANVAS_OP {
HOVER,
RESIZE,
TRANSLATE,
FRAME_SELECT,
}
export enum CANVAS_STATE {
OP = 10,
HOVER = 11,
RESIZE = 12,
LANDING_POINT = 13,
OP_RECT = 14,
}
export type SelectionState = {
[CANVAS_STATE.OP]?:
| CANVAS_OP.HOVER
| CANVAS_OP.RESIZE
| CANVAS_OP.TRANSLATE
| CANVAS_OP.FRAME_SELECT
| null;
[CANVAS_STATE.HOVER]?: string | null;
[CANVAS_STATE.RESIZE]?: RESIZE_TYPE | null;
[CANVAS_STATE.LANDING_POINT]?: Point | null;
[CANVAS_STATE.OP_RECT]?: Range | null;
};
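For instance, under this scheme a `MouseMove` handler branches on the current `OP` value. A minimal sketch (the `onMouseMove` signature and the `apply` callbacks are hypothetical, not the editor's actual API):

```typescript
// Illustrative operation types, mirroring the enum above
enum CANVAS_OP { HOVER, RESIZE, TRANSLATE, FRAME_SELECT }

// Sketch: dispatch a MouseMove to the side effect for the active operation
const onMouseMove = (
  op: CANVAS_OP | null,
  apply: { resize: () => void; translate: () => void; frameSelect: () => void }
) => {
  switch (op) {
    case CANVAS_OP.RESIZE: return apply.resize();
    case CANVAS_OP.TRANSLATE: return apply.translate();
    case CANVAS_OP.FRAME_SELECT: return apply.frameSelect();
    default: return void 0; // plain hover, nothing to drag
  }
};
```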
State Management
When implementing the interaction, I thought long and hard about how to do this well. As stated above, there is no `DOM`, so at the very beginning I built a very messy kind of state management on top of `MouseDown`, `MouseMove`, and `MouseUp`: entirely driven by event triggering, then executing the associated side effects and calling the `Mask Canvas` layer's methods to redraw.
// The receivers elided in the original snippet are reconstructed
// here as an illustrative `this.state` store
const point = this.state.get(CANVAS_STATE.LANDING_POINT);
const opType = this.state.get(CANVAS_STATE.OP);
// ...
this.state.set(CANVAS_STATE.HOVER, delta.id);
this.state.set(CANVAS_STATE.RESIZE, state);
this.state.set(CANVAS_STATE.OP, CANVAS_OP.RESIZE);
this.state.set(CANVAS_STATE.OP, CANVAS_OP.TRANSLATE);
this.state.set(CANVAS_STATE.OP, CANVAS_OP.FRAME_SELECT);
// ...
this.state.set(CANVAS_STATE.LANDING_POINT, new Point(e.offsetX, e.offsetY));
this.state.set(CANVAS_STATE.LANDING_POINT, null);
this.state.set(CANVAS_STATE.OP_RECT, null);
this.state.set(CANVAS_STATE.OP, null);
// ...
I then decided that this code was unmaintainable, so I reworked it: all the state needed is stored in a `Store`, and my custom event management notifies on state changes; the type of each state change then strictly controls what gets drawn. This abstracts the related logic into a layer, but it still means maintaining a large number of interconnected states, so there are many `if/else` branches handling the different kinds of state changes. Because many of the methods are complex and values are passed through multiple layers, the state management, while better than before in that we can at least clearly see where a change came from, is still not easy to maintain in practice.
export const CANVAS_STATE = {
OP: "OP",
RECT: "RECT",
HOVER: "HOVER",
RESIZE: "RESIZE",
LANDING: "LANDING",
} as const;
export type CanvasOp = keyof typeof CANVAS_OP;
export type ResizeType = keyof typeof RESIZE_TYPE;
export type CanvasStore = {
[RESIZE_TYPE.L]?: Range | null;
[RESIZE_TYPE.R]?: Range | null;
[RESIZE_TYPE.T]?: Range | null;
[RESIZE_TYPE.B]?: Range | null;
[RESIZE_TYPE.LT]?: Range | null;
[RESIZE_TYPE.RT]?: Range | null;
[RESIZE_TYPE.LB]?: Range | null;
[RESIZE_TYPE.RB]?: Range | null;
[CANVAS_STATE.RECT]?: Range | null;
[CANVAS_STATE.OP]?: CanvasOp | null;
[CANVAS_STATE.HOVER]?: string | null;
[CANVAS_STATE.LANDING]?: Point | null;
[CANVAS_STATE.RESIZE]?: ResizeType | null;
};
Eventually I thought about it some more. When we operate on the `DOM` in the browser, does the `DOM` really exist? Or when we implement window management on a `PC`, does the window really exist? The answer is surely no. Although the `API`s provided by the system or the browser make the various operations very easy, some of that content is simply drawn for us by the system; essentially it is still graphics, and the events, state, collision detection, and so on are simulated by the system. Our `Canvas` has similar graphics programming capabilities.
So we can certainly implement `DOM`-like capabilities, because what I wanted to build is essentially the association between the `DOM` and events, and the `DOM` structure is by now a very mature design with some great capabilities built in, such as the `DOM` event flow. We would not need to dispatch events to each `Node` individually; we only need to ensure that events start from the `ROOT` node, traverse the tree, and eventually end back at the `ROOT` node. The whole tree structure and its state are built up by the user through `DOM`-like `API`s, so we only have to deal with the `ROOT`, which keeps things simple. The next stage of state management will be implemented this way, so let's start with the `Node` base class.
class Node {
private _range: Range;
private _parent: Node | null;
public readonly children: Node[];
// Implement the event flow as simply as possible
// Use `bubble` directly to distinguish capture from bubbling
protected onMouseDown?: (event: MouseEvent) => void;
protected onMouseUp?: (event: MouseEvent) => void;
protected onMouseEnter?: (event: MouseEvent) => void;
protected onMouseLeave?: (event: MouseEvent) => void;
// `Canvas` node drawing
public drawingMask?: (ctx: CanvasRenderingContext2D) => void;
constructor(range: Range) {
this.children = [];
this._range = range;
this._parent = null;
}
// ====== Parent ======
public get parent() {
return this._parent;
}
public setParent(parent: Node | null) {
this._parent = parent;
}
// ====== Range ======
public get range() {
return this._range;
}
public setRange(range: Range) {
this._range = range;
}
// ====== DOM OP ======
public append<T extends Node>(node: T | Empty) {
// ...
}
public removeChild<T extends Node>(node: T | Empty) {
// ...
}
public remove() {
// ...
}
public clearNodes() {
// ...
}
}
Next, all we need to do is define something like `HTML`'s `Body` element; here we define it as the `Root` node, which inherits from `Node`. It takes over event dispatching for the whole editor, and from it events can be distributed to the child nodes; for our click event, for example, handling `MouseDown` is sufficient. We also need to design the event dispatching itself: we can implement the capture and bubbling mechanisms as well, and with a stack it is easy to trigger the handlers in the right order.
// NOTE: the receivers elided in the original snippet are reconstructed
// with illustrative member names (`engine`, `selection`, `mask`, etc.)
export class Root extends Node {
public cursor: Point;
public hover: ElementNode | ResizeNode | null;
constructor(private editor: Editor, private engine: Canvas) {
super(Range.from(0, 0));
this.hover = null;
this.cursor = Point.from(0, 0);
}
public getFlatNode(isEventCall = true): Node[] {
// Matching is not required for non-default states
if (!this.engine.isDefaultMode()) return [];
// The actual event invocation order is the reverse of the rendering order
const flatNodes: Node[] = [...super.getFlatNode(), this];
return isEventCall ? flatNodes.filter(node => !node.ignoreEvent) : flatNodes;
}
public onMouseDown = (e: MouseEvent) => {
this.editor.canvas.mask.setCursorState(null);
!e.shiftKey && this.editor.selection.clearActiveDeltas();
};
private emit<T extends keyof NodeEvent>(target: Node, type: T, event: NodeEvent[T]) {
const stack: Node[] = [];
let node: Node | null = target.parent;
while (node) {
stack.push(node);
node = node.parent;
}
// Events executed during the capture phase
for (const node of [...stack].reverse()) {
if (!event.capture) break;
const eventFn = node[type as keyof NodeEvent];
eventFn && eventFn(event);
}
// The target node itself
const eventFn = target[type as keyof NodeEvent];
eventFn && eventFn(event);
// Events executed during the bubbling phase
for (const node of stack) {
if (!event.bubble) break;
const eventFn = node[type as keyof NodeEvent];
eventFn && eventFn(event);
}
}
private onMouseDownController = (e: globalThis.MouseEvent) => {
this.cursor = Point.from(e, this.editor);
// Non-default state does not execute events
if (!this.engine.isDefaultMode()) return void 0;
// Get nodes in event order
const flatNode = this.getFlatNode();
let hit: Node | null = null;
const point = Point.from(e, this.editor);
for (const node of flatNode) {
if (node.range.include(point)) {
hit = node;
break;
}
}
hit && this.emit(hit, NODE_EVENT.MOUSE_DOWN, MouseEvent.from(e, this.editor));
};
private onMouseMoveBasic = (e: globalThis.MouseEvent) => {
this.cursor = Point.from(e, this.editor);
// Non-default state does not execute events
if (!this.engine.isDefaultMode()) return void 0;
// Get nodes in event order
const flatNode = this.getFlatNode();
let next: ElementNode | ResizeNode | null = null;
const point = Point.from(e, this.editor);
for (const node of flatNode) {
// Currently only `ElementNode` and `ResizeNode` need to trigger `MouseEnter/Leave` events
const authorize = node instanceof ElementNode || node instanceof ResizeNode;
if (authorize && node.range.include(point)) {
next = node;
break;
}
}
// Emit leave/enter when the hovered node changes
if (this.hover !== next) {
const prev = this.hover;
this.hover = next;
prev && this.emit(prev, NODE_EVENT.MOUSE_LEAVE, MouseEvent.from(e, this.editor));
next && this.emit(next, NODE_EVENT.MOUSE_ENTER, MouseEvent.from(e, this.editor));
}
};
private onMouseMoveController = throttle(this.onMouseMoveBasic, ...THE_CONFIG);
private onMouseUpController = (e: globalThis.MouseEvent) => {
// Non-default state does not execute events
if (!this.engine.isDefaultMode()) return void 0;
// Get nodes in event order
const flatNode = this.getFlatNode();
let hit: Node | null = null;
const point = Point.from(e, this.editor);
for (const node of flatNode) {
if (node.range.include(point)) {
hit = node;
break;
}
}
hit && this.emit(hit, NODE_EVENT.MOUSE_UP, MouseEvent.from(e, this.editor));
};
}
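To make the capture → target → bubble order concrete, here is a self-contained toy version of the ancestor-stack walk used in `emit` (simplified names, not the editor's actual classes):

```typescript
// Toy node: just a name and a parent pointer
class ToyNode {
  public parent: ToyNode | null = null;
  constructor(public name: string) {}
}

// Collect the target's ancestors, then log the capture phase
// (root -> parent), the target itself, and the bubbling phase
// (parent -> root), mirroring the DOM event flow.
const emitOrder = (target: ToyNode): string[] => {
  const log: string[] = [];
  const stack: ToyNode[] = [];
  let node = target.parent;
  while (node) {
    stack.push(node);
    node = node.parent;
  }
  // Capture: iterate a reversed copy so `stack` keeps bubbling order
  for (const n of [...stack].reverse()) log.push(`capture:${n.name}`);
  log.push(`target:${target.name}`);
  for (const n of stack) log.push(`bubble:${n.name}`);
  return log;
};
```

With a `root -> body -> el` chain, `emitOrder(el)` yields capture on `root` then `body`, the target `el`, then bubbling back through `body` and `root`.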
Next, we just need to define the relevant node types, and different functionality follows from the different types: graphics drawing uses the `ElementNode`, resizing uses the `ResizeNode`, and box selection uses the `FrameNode`. Let's take a quick look at `ElementNode`, which represents an actual element.
// NOTE: as above, the elided receivers are reconstructed with illustrative names
class ElementNode extends Node {
private readonly id: string;
private isHovering: boolean;
constructor(private editor: Editor, state: DeltaState) {
const range = state.toRange();
super(range);
this.id = state.id;
const delta = state.toDelta();
const rect = delta.getRect();
this.setZ(rect.z);
this.isHovering = false;
}
protected onMouseDown = (e: MouseEvent) => {
if (e.shiftKey) {
this.editor.selection.addActiveDelta(this.id);
} else {
this.editor.selection.setActiveDelta(this.id);
}
};
protected onMouseEnter = () => {
this.isHovering = true;
if (this.editor.selection.has(this.id)) {
return void 0;
}
this.editor.canvas.mask.drawingEffect(this.range);
};
protected onMouseLeave = () => {
this.isHovering = false;
if (!this.editor.selection.has(this.id)) {
this.editor.canvas.mask.drawingEffect(this.range);
}
};
public drawingMask = (ctx: CanvasRenderingContext2D) => {
if (
this.isHovering &&
!this.editor.selection.has(this.id) &&
!this.editor.state.get(EDITOR_STATE.MOUSE_DOWN)
) {
const { x, y, width, height } = this.range.rect();
Shape.rect(ctx, {
x: x,
y: y,
width: width,
height: height,
borderColor: BLUE_3,
borderWidth: 1,
});
}
};
}
Final Thoughts
Here we discussed how to abstract basic graphics drawing and state management. Because of our limited requirements, the graphics drawing capability is designed to be relatively simple, while the state management went through three iterations before settling on the lightweight `DOM` approach. Further down the line, we need to talk about how to design the hierarchical rendering and event management capabilities.