
G-Data Structures-G


T1 gap

image

First of all, this problem gives a lot of partial score: simply enumerating pairs of points in $O(n^2)$ and using tree differencing already gets 70 points.
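A rough sketch of that brute force as I read it (not the author's code; the input format, array names, and bounds here are assumptions): each pair $(x,y)$ contributes to $ans_u$ exactly when $lca(x,y)$ lies in $subtree_u$, so add its contribution at the LCA and finish with subtree sums.

```cpp
// 70-point sketch (assumed I/O: n, then parents of 2..n, then a_i b_i per node).
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <algorithm>
using namespace std;
typedef long long ll;

const int N = 5005, LOG = 13;
int n, dep[N], up[N][LOG];          // up[x][k] = 2^k-th ancestor of x
ll a[N], b[N], val[N], ans[N];
vector<int> ch[N];

void dfs_init(int u) {
    for (int k = 1; k < LOG; k++) up[u][k] = up[up[u][k - 1]][k - 1];
    for (int v : ch[u]) { dep[v] = dep[u] + 1; dfs_init(v); }
}
int lca(int x, int y) {             // standard binary-lifting LCA
    if (dep[x] < dep[y]) swap(x, y);
    for (int k = LOG - 1; k >= 0; k--)
        if (dep[x] - (1 << k) >= dep[y]) x = up[x][k];
    if (x == y) return x;
    for (int k = LOG - 1; k >= 0; k--)
        if (up[x][k] != up[y][k]) { x = up[x][k]; y = up[y][k]; }
    return up[x][0];
}
void dfs_sum(int u) {               // ans_u = sum of val over subtree_u
    ans[u] = val[u];
    for (int v : ch[u]) { dfs_sum(v); ans[u] += ans[v]; }
}
int main() {
    scanf("%d", &n);
    for (int i = 2; i <= n; i++) { scanf("%d", &up[i][0]); ch[up[i][0]].push_back(i); }
    for (int i = 1; i <= n; i++) scanf("%lld%lld", &a[i], &b[i]);
    dfs_init(1);
    for (int x = 1; x <= n; x++)
        for (int y = x + 1; y <= n; y++)   // each unordered pair counted twice,
            val[lca(x, y)] += 2 * min(llabs(a[x] - a[y]), llabs(b[x] - b[y]));
    dfs_sum(1);                            // matching the double sum in ans_u
    for (int i = 1; i <= n; i++) printf("%lld\n", ans[i]);
    return 0;
}
```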

The full solution, though, has almost nothing to do with the partial score.

Start from what we need to compute:

\[ans_u = \sum_{x \in subtree_u} \sum_{y \in subtree_u} \min(\vert a_x - a_y \vert , \vert b_x - b_y \vert) \]

There is a $\min$ in it, which is hard to maintain with data structures, so it's time to transform the problem.

\[a + b = \min(a,b) + \max(a,b) \]

This identity is obvious, and with it we can turn the $\min$ into a $\max$ plus a plain sum.

But haven't we just gone around in a circle? Isn't maintaining $\max$ just as hard as maintaining $\min$?

True, but the $\max$ can be transformed once more.

In the original problem, a point with weights $a_i , b_i$ can be mapped to the point $(a_i,b_i)$ in the Cartesian plane. The contribution between $(a_i,b_i)$ and $(a_j,b_j)$ is $\min(\vert a_i - a_j \vert , \vert b_i - b_j \vert)$, that is, $\vert a_i - a_j \vert + \vert b_i - b_j \vert - \max(\vert a_i - a_j \vert , \vert b_i - b_j \vert)$.

And $\max(\vert a_i - a_j \vert , \vert b_i - b_j \vert)$ is exactly the Chebyshev distance.

If you want to learn about the Chebyshev distance, see this blog.

After that, humankind (not me, anyway) discovered that the Chebyshev distance can be converted into the Manhattan distance. This means the $\max$ becomes an absolute-value sum, and the whole problem turns into four absolute-value sums.

How do you turn it?

Look at this picture: these are all the points in the Cartesian plane whose Chebyshev distance from the origin is 1:

image

They form the four sides of a square with side length 2.

Now look at all the points whose Manhattan distance from the origin is 1:

image

They form the four sides of a square with side length $\sqrt 2$.

Since both are squares, we can just rotate one and scale the side length by $\frac{\sqrt{2}}{2}$.

Rotating a coordinate system is supposedly done with a rotation matrix, but I have no idea how to do that.

But since we're in two dimensions, we can derive it with complex numbers instead.

Let $x + yi$ denote the point $(x,y)$. Multiplying two complex numbers multiplies their moduli and adds their arguments.

So rotating a point in the plane can be worked out with complex numbers. Rotating $(x,y)$ counterclockwise by 45° and scaling by $\frac{\sqrt{2}}{2}$ is the same as multiplying by $\frac{1}{2} + \frac{1}{2} i$, which turns $(x,y)$ into $(\frac{x-y}{2} , \frac{x+y}{2})$.
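Expanding the product shows where the new coordinates come from:

\[(x+yi)\left(\frac{1}{2}+\frac{1}{2}i\right) = \frac{x}{2}+\frac{x}{2}i+\frac{y}{2}i+\frac{y}{2}i^2 = \frac{x-y}{2}+\frac{x+y}{2}i \]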

Of course, rotating by 45°, 135°, -45°, or -135° all work; the transform is usually written as sending $(x,y)$ to $(\frac{x+y}{2} , \frac{x-y}{2})$.
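Whichever variant you pick, the key fact is the same. Writing $u = a_i - a_j$ and $v = b_i - b_j$, the Manhattan distance between the rotated points is exactly the Chebyshev distance between the originals:

\[\left\vert \frac{u+v}{2} \right\vert + \left\vert \frac{u-v}{2} \right\vert = \frac{\vert u+v \vert + \vert u-v \vert}{2} = \max(\vert u \vert , \vert v \vert) \]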

Next, let $a1_i = \frac{a_i + b_i}{2} , b1_i = \frac{a_i - b_i}{2}$, and the whole problem becomes a single formula:

\[ans_u = \sum_{x \in subtree_u} \sum_{y \in subtree_u} \left( \vert a_x - a_y \vert + \vert b_x - b_y \vert - \vert a1_x - a1_y \vert - \vert b1_x - b1_y \vert \right) \]

So all we have to maintain is the sum of four absolute-value terms.
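As a quick sanity check (my own toy snippet with hypothetical values, not part of the solution), the per-pair identity can be verified numerically. Working with the doubled coordinates $a_i + b_i$ and $a_i - b_i$ avoids half-integers; the sum of the two absolute differences is always even, so halving it at the end is exact.

```cpp
// Numerically verify: min(|da|,|db|) = |da| + |db| - (|dA| + |dB|) / 2,
// where A = a + b and B = a - b are the (doubled) rotated coordinates.
#include <cstdio>
#include <cstdlib>
#include <algorithm>
using namespace std;
typedef long long ll;

int main() {
    srand(20241016);
    for (int t = 0; t < 1000000; t++) {
        ll ax = rand() % 2001 - 1000, bx = rand() % 2001 - 1000;
        ll ay = rand() % 2001 - 1000, by = rand() % 2001 - 1000;
        ll da = llabs(ax - ay), db = llabs(bx - by);
        ll lhs = min(da, db);                            // original min form
        ll Ax = ax + bx, Bx = ax - bx, Ay = ay + by, By = ay - by;
        ll rhs = da + db - (llabs(Ax - Ay) + llabs(Bx - By)) / 2;
        if (lhs != rhs) { puts("mismatch"); return 1; }
    }
    puts("ok");
    return 0;
}
```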

Each of these can be computed by divide and conquer over the value domain: for $ans[1,n]$ (here $[1,n]$ is the value domain), split the pairs into those with both values on the same side of the midpoint and those on opposite sides, i.e.:

\[ans[1,n] = ans[1,mid] + ans[mid+1,n] + cnt[1,mid] \times sum[mid+1,n] - cnt[mid+1,n] \times sum[1,mid] \]

Just maintain this in a segment tree over the value domain and do segment tree merging up the tree; the only thing that needs rewriting is the $push\_up$ function.
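A minimal sketch of that rewritten $push\_up$ and the merge (my own illustration, not the author's code; the node layout, value-domain bound, and the toy `main` are assumptions), with node 0 acting as the all-zero null node:

```cpp
// Dynamic segment tree over the value domain [1, V]; each node stores
// cnt (how many values), sum (their total), and ans (sum of |x - y| over
// unordered pairs of values inside the node's range).
#include <cstdio>
typedef long long ll;

const int MAXNODE = 1 << 12;          // plenty for this toy demo
const int V = 1000;                   // assumed value-domain bound
int lc[MAXNODE], rc[MAXNODE], tot;
ll cnt[MAXNODE], sum[MAXNODE], ans[MAXNODE];

// Exactly the recurrence above: same-side pairs plus cross pairs.
void push_up(int p) {
    ans[p] = ans[lc[p]] + ans[rc[p]]
           + cnt[lc[p]] * sum[rc[p]] - cnt[rc[p]] * sum[lc[p]];
}

void insert(int &p, int l, int r, int v) {
    if (!p) p = ++tot;
    if (l == r) { cnt[p]++; sum[p] += v; return; }
    int mid = (l + r) >> 1;
    if (v <= mid) insert(lc[p], l, mid, v);
    else insert(rc[p], mid + 1, r, v);
    cnt[p] = cnt[lc[p]] + cnt[rc[p]];
    sum[p] = sum[lc[p]] + sum[rc[p]];
    push_up(p);
}

int merge(int p, int q, int l, int r) {
    if (!p || !q) return p ? p : q;   // one side empty: reuse the other as-is
    if (l == r) { cnt[p] += cnt[q]; sum[p] += sum[q]; return p; }
    int mid = (l + r) >> 1;
    lc[p] = merge(lc[p], lc[q], l, mid);
    rc[p] = merge(rc[p], rc[q], mid + 1, r);
    cnt[p] = cnt[lc[p]] + cnt[rc[p]];
    sum[p] = sum[lc[p]] + sum[rc[p]];
    push_up(p);                        // picks up cross pairs between the two trees
    return p;
}

int main() {
    int A = 0, B = 0;
    insert(A, 1, V, 3); insert(A, 1, V, 7);   // tree A holds {3, 7}
    insert(B, 1, V, 10);                      // tree B holds {10}
    int root = merge(A, B, 1, V);
    printf("%lld\n", ans[root]);              // |3-7| + |3-10| + |7-10| = 14
    return 0;
}
```

In the real problem you would keep one such tree per vertex for each of $a$, $b$, $a1$, $b1$ and merge children into parents during a DFS.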

Time complexity $O(n \log n)$, but the constant factor is huge.

From wkh you can also learn an FHQ treap merging approach, which is likewise $O(n \log n)$ for this problem, but why is that one the top solution?

\[END \ \ \ OF \ \ \ T1 \]
