DP Overview
DP (dynamic programming) is a method based on decomposition: it solves a complex problem by breaking it into simpler subproblems.
Dynamic programming is usually far less time-consuming than the plain brute-force search solution.
Dynamic Programming and Recursion
As mentioned before, dynamic programming is also a decomposition idea, and recursion is the traditional decomposition idea, yet their time complexities differ greatly. Why?
Because dynamic programming is a bottom-up idea, while recursion is a top-down solution.
Bottom-Up and Top-Down?
Bottom-up
The meaning is simple: derive from the bottom up: \(f(1) \rightarrow f(2) \rightarrow \dots \rightarrow f(n - 1) \rightarrow f(n)\).
This is why dynamic programming algorithms abandon recursive functions in favor of iterative loops that push states upward.
top-down
Conversely, top-down means pushing from the top downward, hitting the bottom, and then passing the results back up:
\(f(n) \rightarrow f(n - 1) \rightarrow \dots \rightarrow f(2) \rightarrow f(1) \rightarrow f(2) \rightarrow f(3) \rightarrow \dots \rightarrow f(n - 1) \rightarrow f(n)\)
This is why plain recursion has a much higher time complexity than dynamic programming: it keeps recomputing the same subproblems.
We can see that dynamic programming is more like an upgraded version of the recursive algorithm.
State transition equation
The state transition equation is the formula describing how subproblem results are combined into the parent problem's result.
In simple DP, the transition equation can also be plugged directly into brute-force search algorithms such as DFS, BFS, and so on.
The hardest part of DP is working out the state transition equation; without it, everything else is for naught.
Example: let \(f_i\) be the \(i\)-th number of the Fibonacci sequence; then the state transition equation is \(f_i = f_{i - 1} + f_{i - 2}\).
The DP goes as follows:
f[1] = 1;
f[2] = 1;
for (int i = 3; i <= n; i++)
f[i] = f[i - 1] + f[i - 2]; // transition equation
cout << f[n];
Similarly, we can plug the transition equation into brute-force recursion:
int f(int n)
{
if (n == 1 || n == 2)
return 1;
return f(n - 1) + f(n - 2); // transition equation
}
Elements of dynamic programming
- Optimal substructure: the optimal solution of the problem contains the optimal solutions of its subproblems. That is: local optimal solutions make up the global optimal solution.
- No aftereffects:
  - When deriving later states, we only care about the values of earlier states, not how they were derived.
  - Once a state is determined, later decisions never change earlier ones.
- Overlapping subproblems: different decisions may reach the same state, producing duplicate states. To avoid unnecessary computation, we typically use memoized search (store each newly computed state so it can be reused next time) to solve this, which is the classic trade of space for time.
If these three conditions are not met, don't even think about using DP.
Definition of state
Foreword: trading space for time
Quite simply, it is named for spending extra space to make sure the solution doesn't time out.
What is a state?
The state, in layman's terms, is what your \(f_{xxx}\) represents. For example, in the Fibonacci sequence, \(f_i\) represents the \(i\)-th number.
For states:
- The more states there are, the more information is represented, and the more space is used.
- Conversely, the fewer states there are, the less information is represented, and the less space is used.
When we define the state, these cases may arise:
\(\text{Special cases} \begin{cases} \text{Too few states?} \begin{cases} \text{too little information} & \text{no solution} \\\\ \text{too little information} & \text{fails the DP elements} \end{cases} \\\\ \text{Too many states?} \begin{cases} \text{too much space} & \text{MLE} \\\\ \text{too much time to update states} & \text{TLE} \end{cases} \end{cases}\)
Therefore, the state and the state transition equation are the two hardest parts of dynamic programming; once these two are thought through clearly, the problem is basically solved.
Examples
Example 1: Ideas
Pure DP
Before looking at the data range: a straightforward DFS will do!
Note: two cases.
- Take this item:
  - with the 3x bonus?
  - with the 1x bonus?
- Don't take this item.
ll dfs(int i, int now, ll cnt) // i: current index; now: items taken so far; cnt: prize collected
{
if (i == n + 1)
return cnt;
if (!((now + 1) % 3) && ((now + 1) >= 3)) // taking this item makes the count a multiple of 3: triple bonus
return max(dfs(i + 1, now + 1, cnt + (a[i] * 3)), dfs(i + 1, now, cnt));
else
return max(dfs(i + 1, now + 1, cnt + a[i]), dfs(i + 1, now, cnt));
}
Looking at the problem, the state seems obvious at a glance: \(f_i\) is the maximum prize obtainable from the first \(i\) items.
However, we find that this state does not satisfy the no-aftereffect property.
Building on the approach above, we try to optimize by spending some extra space.
Change the state to: \(f_{i, j}\) is the maximum prize obtainable from the first \(i\) items when the number of items taken so far is congruent to \(j\) modulo \(3\).
Taking the maximum over the branches in each case:
\(f_{i, j} = \begin{cases} j = 0 \begin{cases} i \ge 3 \begin{cases} f_{i - 1, 0} & \text{don't take} \\\\ f_{i - 1, 2} + a_i \times 3 & \text{take} \end{cases} \\\\ f_{i - 1, 0} & \text{fewer than 3 items, so no "take" case} \end{cases} \\\\ j = 1 \begin{cases} f_{i - 1, 1} & \text{don't take} \\\\ f_{i - 1, 0} + a_i & \text{take} \end{cases} \\\\ j = 2 \begin{cases} i \ge 2 \begin{cases} f_{i - 1, 2} & \text{don't take} \\\\ f_{i - 1, 1} + a_i & \text{take} \end{cases} \\\\ f_{i - 1, 2} & \text{fewer than 2 items, so no "take" case} \end{cases} \end{cases}\)
The full code is:
#include <bits/stdc++.h>
using namespace std;
#define ll long long
int n;
ll a[100005];
/*
20PTS
ll dfs(int i, int now, ll cnt)
{
if (i == n + 1)
return cnt;
if (!((now + 1) % 3) && ((now + 1) >= 3))
return max(dfs(i + 1, now + 1, cnt + (a[i] * 3)), dfs(i + 1, now, cnt));
else
return max(dfs(i + 1, now + 1, cnt + a[i]), dfs(i + 1, now, cnt));
}
*/
ll f[100005][3];
ll ans;
int main()
{
cin >> n;
for (int i = 1; i <= n; i++)
cin >> a[i];
// cout << dfs(1, 0, 0) << "\n";
for (int i = 1; i <= n; i++)
{
f[i][0] = f[i - 1][0];
f[i][1] = f[i - 1][1];
f[i][2] = f[i - 1][2];
if (i >= 3)
f[i][0] = max(f[i][0], f[i - 1][2] + (a[i] * 3));
f[i][1] = max(f[i][1], f[i - 1][0] + a[i]);
if (i >= 2)
f[i][2] = max(f[i][2], f[i - 1][1] + a[i]);
ans = max(ans, f[i][0]);
ans = max(ans, f[i][1]);
ans = max(ans, f[i][2]);
}
cout << ans << "\n";
return 0;
}
First, let's appreciate the original problem setter's hint.
Example 2 Preface: Case Analysis
After reading many unsatisfying explanations, I condensed it down to: a case analysis is categorize --> discuss! Categorizing means splitting the problem into classes with different outcomes/forms/properties and then solving them one by one.
Example 2: Ideas
Since we're talking about case analysis, let's start by splitting into cases.
Multiplying the subsegment \([L, R]\) by \(C\) changes the total sum by \((C - 1) \times \sum_{i = L}^{R} A_i\), so:
\(\text{ans} = \sum_{i = 1}^{N} A_i + \begin{cases} (C - 1) \times \max\left(\sum_{i = L}^{R} A_i\right) & C \ge 1 \\\\ (C - 1) \times \min\left(\sum_{i = L}^{R} A_i\right) & C < 1 \end{cases}\)
How do we find these maxima and minima in \(O(N)\)? Maximum/minimum subsegment sums are exactly what we need.
Just compare the two candidates at the end.
Full Code:
#include <bits/stdc++.h>
using namespace std;
#define ll long long
int n;
ll c;
ll a[100005];
ll solve()
{
ll original_sum = 0;
for (int i = 1; i <= n; ++i)
original_sum += a[i];
ll dp_max[100005], dp_min[100005];
dp_max[1] = a[1];
dp_min[1] = a[1];
ll maxx = dp_max[1];
ll minn = dp_min[1];
for (int i = 2; i <= n; i++)
{
dp_max[i] = max(a[i], dp_max[i - 1] + a[i]);
dp_min[i] = min(a[i], dp_min[i - 1] + a[i]);
maxx = max(maxx, dp_max[i]);
minn = min(minn, dp_min[i]);
}
ll res = max((c - 1) * maxx, (c - 1) * minn);
ll ans = original_sum + res;
return ans;
}
int main()
{
cin >> n >> c;
for (int i = 1; i <= n; ++i)
cin >> a[i];
cout << solve() << endl;
return 0;
}