
The install count has finally broken a thousand! On problems and solutions in browser extension development

Published: 2024-07-22 09:28:17


The browser extension I developed has finally passed a thousand installs! It now has 2.1k+ installs on Firefox Add-ons and 2k+ on the Chrome Web Store. The Firefox number shown in the extension market is actually an average weekly install figure, and the actual installs on any given day are quite a bit higher than that average; the Chrome Web Store stops displaying precise numbers once an extension passes 1k installs, so the actual count there is also higher than the displayed 1k.

I had actually implemented the functionality as a userscript before I made the extension, and the script has 2688k+ installs on GreasyFork. There are two main reasons I went on to implement an extension: first, I wanted to learn extension development, and there really are use cases for it at work, especially when you have to break through browser limitations to do some special work; second, I found that someone had taken my GPL-licensed code from GreasyFork, packaged it directly into a plugin with advertisements added, and that plugin has 400k+ installs.

So I implemented the browser extension on top of the script's capabilities. Mainly for the sake of learning, I built the entire development environment from scratch and dealt with quite a few compatibility problems along the way; below we'll go through the related issues and solutions. The project address is on GitHub; if you think it's good, give it a star 😁.

Extension packaging solution

We mentioned that we are building the development environment from scratch, so we need to pick a bundler for the extension. Here I chose rspack; of course using webpack or rollup would be fine as well, I'm simply more familiar with rspack and it bundles faster, and the configuration is similar for any of these bundlers. Also note that we are really only using build-level bundling here; dev-server style schemes are not very applicable under v3 at the moment.

It should be noted that in a browser extension we need to define more than one entry file, and we need a single-file output scheme rather than one entry split into multiple chunks; the CSS likewise needs to be bundled into the entry output. The output filenames must also not carry a hash suffix, to prevent the files from not being found. None of this is a big problem, just something to be aware of in the config file.

module.exports = {
  context: __dirname,
  entry: {
    popup: "./src/popup/",
    content: "./src/content/",
    worker: "./src/worker/",
    [INJECT_FILE]: "./src/inject/",
  },
  // ...
  output: {
    publicPath: "/",
    filename: "[name].js",
    path: path.resolve(__dirname, folder),
  },
};

Notice that the INJECT_FILE output filename here is dynamic. Because the inject script must be injected into the browser page, and the injection scheme can conflict with the page itself, we make the generated filename different on every build; the filename changes with every release, and the same consistent randomized naming applies to the event names used by the simulated event communication scheme.

const EVENT_TYPE = isDev ? "EVENT_TYPE" : getUniqueId();
const INJECT_FILE = isDev ? "INJECT_FILE" : getUniqueId();

process.env.EVENT_TYPE = EVENT_TYPE;
process.env.INJECT_FILE = INJECT_FILE;
// ...

module.exports = {
  context: __dirname,
  builtins: {
    define: {
      "__DEV__": JSON.stringify(isDev),
      "process.env.EVENT_TYPE": JSON.stringify(process.env.EVENT_TYPE),
      "process.env.INJECT_FILE": JSON.stringify(process.env.INJECT_FILE),
      // ...
    },
  },
  // ...
};
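For reference, getUniqueId only needs to produce a fresh, identifier-safe string per build. The project's real implementation may differ; a minimal sketch could look like this:

```javascript
// A minimal sketch of a build-time unique id generator; the real
// getUniqueId in the project may differ. The result must be a valid
// JS identifier, since it is used both as an output filename and as
// a key hung on window.
const getUniqueId = () =>
  "x" + Math.random().toString(36).slice(2, 10) + Date.now().toString(36);
```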

Chrome and Firefox compatibility

Chrome has been strongly pushing extensions toward the v3 version, meaning manifest_version must be marked as 3, while submitting a manifest_version: 3 version to Firefox gets you a message that it is not recommended. Personally I don't like v3: its restrictions are particularly heavy and many features are not properly implemented, a point we will come back to later. So since Chrome forces v3 and Firefox recommends v2, we need to implement a compatibility scheme for the Chromium kernel and the Gecko kernel separately.

You may notice that this is very much like a multi-platform build scenario, that is, we need to package the same code for multiple platforms. The most common approach I use for cross-platform compilation is process.env checks together with __DEV__, but after using it for a while I realized that in this conditional-compilation-like situation, extensive process.env.PLATFORM === xxx branching easily produces deep nesting, and readability becomes poor. After all, Promise exists precisely to solve the nesting hell of asynchronous callbacks; if we reintroduce nesting problems just because we need cross-platform compilation, it never feels like a good solution.
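As an illustration of the nesting problem, here is a hypothetical example (the names are not from the project) of what the env-variable branching tends to look like once a platform check combines with ordinary feature detection:

```javascript
// Hypothetical example of the `process.env`-style branching described
// above: platform checks nest inside feature checks, and readability
// degrades as conditions combine. All values here are illustrative.
const describeInjection = (platform, hasScriptingApi) => {
  if (platform === "chromium") {
    if (hasScriptingApi) {
      return "register via scripting api";
    } else {
      return "fall back to tabs api";
    }
  } else {
    if (platform === "gecko") {
      return "inline code via content script";
    } else {
      return "unsupported";
    }
  }
};
```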

C/C++ has a very interesting preprocessor. The C Preprocessor is not part of the compiler; it is a separate step in the compilation process. Simply put, the C Preprocessor is essentially a text replacement tool, for example macro parameters are replaced directly in the original text, and it completes the required preprocessing before the actual compilation. #include, #define, #ifdef and so on are all C Preprocessor directives; here we mainly care about the conditional compilation part, that is, the #if #endif, #ifdef #endif, #ifndef #endif directives.

#if VERBOSE >= 2
  print("trace message");
#endif

#ifdef __unix__ /* __unix__ is usually defined by compilers targeting Unix systems */
# include <unistd.h>
#elif defined _WIN32 /* _WIN32 is usually defined by compilers targeting 32 or 64 bit Windows systems */
# include <windows.h>
#endif

We can then achieve something similar with the help of the build tool. Since the C Preprocessor is a preprocessing tool that doesn't participate in the actual compilation, it is very much like a webpack loader, and direct replacement of the original text is perfectly doable inside a loader. Something like #ifdef / #endif can be realized in the form of comments, which lets us avoid deep nesting problems. The string-replacement logic can directly modify the original source: code for platforms that don't meet the condition is removed and code for platforms that do is retained, achieving an effect just like #ifdef / #endif. Implementing this via comments is also helpful for certain complex scenarios. For example, I've encountered rather complex SDK packaging scenarios where internal, external, and the main project platforms all behave differently; without building multiple packages, cross-platform use would require users to configure the build tool themselves, whereas the comment approach still works even when the loader is not configured, so in some cases users can avoid changing their configuration. Of course, this is deeply coupled to the business scenario and is only offered as a reference.

// #IFDEF CHROMIUM
console.log("IS IN CHROMIUM");
// #ENDIF

// #IFDEF GECKO
console.log("IS IN GECKO");
// #ENDIF

At first I wanted to process this directly with regular expressions, but that turned out to be cumbersome, especially with nesting, where the logic is not easy to handle. Then I realized that code is in any case processed line by line, so line-based handling is the most convenient. In particular, since the directives are themselves comments and will ultimately be deleted, even with indentation we can simply strip the surrounding whitespace and match the markers directly. The idea then becomes much simpler: when the starting directive #IFDEF is hit, decide whether to switch the erasing state on; when the ending directive #ENDIF is hit, decide whether to switch it off. The ultimate goal is to remove code, so returning blank lines for code that fails the condition check is sufficient. We still need to take care of nesting: a stack records the index of each #IFDEF when it is pushed, popping when an #ENDIF is encountered; we also record the current processing state, and if the state is erasing, then on popping we check whether we need to reset it, thereby ending the processing of the current block. A debug switch can also generate files showing which blocks were hit after processing.

// CURRENT PLATFORM: GECKO

// #IFDEF CHROMIUM
// some expressions... // remove
// #ENDIF

// #IFDEF GECKO
// some expressions... // retain
// #ENDIF

// #IFDEF CHROMIUM
// some expressions... // remove
// #IFDEF GECKO
// some expressions... // remove
// #ENDIF
// #ENDIF

// #IFDEF GECKO
// some expressions... // retain
// #IFDEF CHROMIUM
// some expressions... // remove
// #ENDIF
// #ENDIF

// #IFDEF CHROMIUM|GECKO
// some expressions... // retain
// #IFDEF GECKO
// some expressions... // retain
// #ENDIF
// #ENDIF
// ...
// Iterate over the lines, controlling whether each line hits a preprocessing condition
const platform = (process.env[envKey] || "").toLowerCase();
let terser = false;
let revised = false;
let terserIndex = -1;
/** @type {number[]} */
const stack = [];
const lines = source.split("\n");
const target = lines.map((line, index) => {
  // Strip surrounding whitespace and the leading comment marker (optional)
  const code = line.trim().replace(/^\/\/\s*/, "");
  // Check for the start of a preprocessing directive `#IFDEF`
  if (/^#IFDEF/.test(code)) {
    stack.push(index);
    // If we are already erasing, just keep going
    if (terser) return "";
    const match = code.replace("#IFDEF", "").trim();
    const group = match.split("|").map(item => item.trim().toLowerCase());
    if (group.indexOf(platform) === -1) {
      terser = true;
      revised = true;
      terserIndex = index;
    }
    return "";
  }
  // Check for the end of a preprocessing directive `#ENDIF`
  if (/^#ENDIF$/.test(code)) {
    const start = stack.pop();
    // Ignore extra `#ENDIF`s
    if (start === undefined) return "";
    if (start === terserIndex) {
      terser = false;
      terserIndex = -1;
    }
    return "";
  }
  // Erase the line if a preprocessing condition was hit
  if (terser) return "";
  return line;
});
// ...

Then, in actual use, take registering the Badge as an example: the if branches simply execute the code for each platform separately, and of course if you have similar definitions it's also easy to just redefine the variables directly.

let env = chrome;
// #IFDEF GECKO
if (typeof browser !== "undefined") {
  env = browser;
}
// #ENDIF
export const cross = env;

// ...
let action: typeof cross.action | typeof cross.browserAction = cross.action;
// #IFDEF GECKO
action = cross.browserAction;
// #ENDIF
action.setBadgeText({ text, tabId });
action.setBadgeBackgroundColor({ color: "#4e5969", tabId });

Executing code before the page's JS

An important capability of browser extensions is document_start, which means the code injected by the extension runs before the site's own JS code executes. That leaves us plenty of room for hooks: imagine that if we could run the JS code we want before the page actually loads, we could do almost anything to the current page. Although we can't hook the language itself, page code always has to call the browser-provided APIs to do anything meaningful, so we can find a way to hijack those function calls to get the data we want. For example, we can hijack Function.prototype.call, and for that hijack to work, our hijacking code must be the first to run on the whole page; if the function has already been called, hijacking it afterwards is meaningless.

Function.prototype.call = function (dynamic, ...args) {
  const context = Object(dynamic) || window;
  const symbol = Symbol();
  context[symbol] = this;
  args.length === 2 && onCallHook(args); // hypothetical handler for the calls we care about
  try {
    const result = context[symbol](...args);
    delete context[symbol];
    return result;
  } catch (error) {
    console.log("Hook Call Error", error);
    console.log(context, context[symbol], this, dynamic, args);
    return null;
  }
};

You may wonder where the significance of this code lies, so let me cite a simple practical example. In a certain library, all text is rendered through canvas, so there is no DOM, and if we want to get the content of the whole document there is no way to copy it directly. A viable option is to hijack document.createElement: when the created element is a canvas, we can grab the canvas object in advance and obtain its ctx. And because actually drawing text always requires calling the context's fillText method, hijacking that method in turn lets us take out the drawn text, immediately create DOM of our own drawn elsewhere, and copy from it if we want.
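The hijack pattern itself can be demonstrated without a browser. Below, a stand-in class plays the role of the canvas context, and its fillText-like method is wrapped so every drawn string is captured before delegating to the original; FakeCtx and drawnTexts are illustrative names, not part of the project.

```javascript
// Browser-free sketch of the method-hijack pattern: wrap a prototype
// method, record the arguments we care about, then delegate to the
// original so the caller's behavior is unchanged. In the real scenario
// the target would be CanvasRenderingContext2D.prototype.fillText.
class FakeCtx {
  fillText(text, x, y) {
    return `drew "${text}" at ${x},${y}`;
  }
}

const drawnTexts = [];
const original = FakeCtx.prototype.fillText;
FakeCtx.prototype.fillText = function (...args) {
  drawnTexts.push(args[0]); // capture the text before it is drawn
  return original.apply(this, args);
};

const ctx = new FakeCtx();
ctx.fillText("hello", 1, 2);
ctx.fillText("world", 3, 4);
// drawnTexts is now ["hello", "world"]
```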

So back to the problem at hand: if we can guarantee that our script executes first, we can do almost anything at the language level, such as modifying the window object, hooking function definitions, modifying prototype chains, blocking events, and so on. This capability itself comes from the browser extension, and how to expose the extension's capability to the Web page is what a userscript manager needs to consider. So let's assume here that the user script runs in the browser page's Inject Script context rather than a Content Script. Based on this assumption, most of us have probably written dynamic/asynchronous JS loading in a way similar to the following.

const loadScriptAsync = (url: string) => {
    return new Promise<Event>((resolve, reject) => {
        const script = document.createElement("script");
        script.src = url;
        script.async = true;
        script.onload = e => {
            script.remove();
            resolve(e);
        };
        script.onerror = e => {
            script.remove();
            reject(e);
        };
        document.body.appendChild(script);
    });
};

Now there's an obvious problem: if we load the script after the body tag is built, which is roughly DOMContentLoaded time, we certainly can't reach the document-start goal. Even handling it after the head tag is complete doesn't work, since many sites write component JS resources inside head, where that loading timing is already too late. In fact the biggest problem is still that the whole process is asynchronous: before the external script finishes loading, a lot of JS code has already executed, so it can't achieve what we want, namely "execute first".

So next let's explore the concrete implementation, starting with the v2 extension on Gecko-kernel browsers. The first thing loaded for the entire page must be the html tag, so obviously we just need to insert the script at the html tag level, combining the background's ability to dynamically execute code with a Content Script declared "run_at": "document_start" that establishes message communication to confirm the tab to inject into. Doesn't this method seem simple? Yet it's such a simple question that made me think long and hard about how it's done.

// Content Script --> Background
// Background -> chrome.tabs.executeScript
chrome.tabs.executeScript(tabId, {
  frameId: sender.frameId,
  code: `(function(){
    let temp = document.createElementNS("http://www.w3.org/1999/xhtml", "script");
        temp.setAttribute('type', 'text/javascript');
        temp.innerHTML = "${code}";
        temp.className = "injected-js";
        document.documentElement.appendChild(temp);
        temp.remove();
    }())`,
  runAt,
});

This actually already looks pretty good and can basically achieve document-start, but "basically" means there are still cases where it goes wrong. Look closely at the implementation: there is a round of communication, namely Content Script --> Background. Communication is asynchronous processing, and asynchronous processing costs time; once time is spent, the user's page may already have executed a large amount of code, so this implementation will occasionally fail to achieve document-start, that is, the script will in fact sometimes fail.

So what's the solution to this problem? In v2 we clearly know that a Content Script's document-start is fully controllable, but a Content Script is not an Inject Script: it has no way to access the page's window object, and thus no way to actually hijack the page's functions. The problem looks complex, but once figured out the solution is also very simple: on the basis of the original Content Script we introduce one more Content Script, and this Content Script's code is exactly equivalent to the original Inject Script, except that it will be hung onto window. We can write a plugin for the bundler to accomplish this.

compiler.hooks.emit.tapAsync("WrapperCodePlugin", (compilation, done) => {
  Object.keys(compilation.assets).forEach(key => {
    if (!isChromium && key === process.env.INJECT_FILE + ".js") {
      try {
        const buffer = compilation.assets[key].source();
        let code = buffer.toString("utf-8");
        code = `window.${process.env.INJECT_FILE}=function(){${code}}`;
        compilation.assets[key] = {
          source() {
            return code;
          },
          size() {
            return code.length;
          },
        };
      } catch (error) {
        console.log("Parse Inject File Error", error);
      }
    }
  });
  done();
});

This code means that the second Content Script's window object carries a randomly generated key (this is the potential conflict we mentioned earlier), and its content is the function wrapping the script we actually want to inject into the page. Now that we can access the function, how do we make it execute in the user's page? This actually uses the same create-script method, but the implementation here is very clever: we take the function hung on window by that Content Script, turn it into source code via toString, and inject it directly into the page as inline code, thereby achieving a true document-start.

const fn = window[process.env.INJECT_FILE as unknown as number] as unknown as () => void;
// #IFDEF GECKO
if (fn) {
  const script = document.createElementNS("http://www.w3.org/1999/xhtml", "script");
  script.setAttribute("type", "text/javascript");
  script.innerText = `;(${fn.toString()})();`;
  document.documentElement.appendChild(script);
  script.onload = () => script.remove();
  // eslint-disable-next-line @typescript-eslint/ban-ts-comment
  // @ts-ignore
  delete window[process.env.INJECT_FILE];
}
// #ENDIF

It was also mentioned earlier that Chrome no longer allows v2 extensions to be submitted, so we can only submit v3 code, but v3 has a very strict CSP (Content Security Policy) restriction, which can roughly be thought of as not allowing dynamic execution of code. So all the approaches described above fail, and we are left with writing code similar to the following.

const script = document.createElementNS("http://www.w3.org/1999/xhtml", "script");
script.setAttribute("type", "text/javascript");
script.setAttribute("src", chrome.runtime.getURL(process.env.INJECT_FILE + ".js"));
document.documentElement.appendChild(script);
script.onload = () => script.remove();

Although it looks like we also create the script tag immediately in the Content Script and execute the code, can it reach our document-start? Unfortunately, the answer is no. It works the first time the page is opened, but after that, because the script actually fetches an external resource, Chrome puts it into a queue together with the page's other scripts, and those other scripts may be served from a strong cache, so in practice who executes first is not guaranteed. This kind of instability is unacceptable; it certainly cannot achieve the document-start goal. From this alone, v3 is not mature and support for many capabilities is not in place. Later, official solutions to this problem did appear, but because we have no way to determine the browser version on the user's client, a lot of compatibility handling is still needed.

export const implantScript = () => {
  /** RUN INJECT SCRIPT IN DOCUMENT START **/
  // #IFDEF CHROMIUM
  // /p/chromium/issues/detail?id=634381
  // /questions/75495191/chrome-extension-manifest-v3-how-to-use-window-addeventlistener
  if (cross.scripting && cross.scripting.registerContentScripts) {
    console.log("Register Inject Scripts By Scripting API");
    // /en-US/docs/Mozilla/Add-ons/WebExtensions/API/scripting/registerContentScripts
    cross.scripting
      .registerContentScripts([
        {
          matches: [...URL_MATCH],
          runAt: "document_start",
          world: "MAIN",
          allFrames: true,
          js: [process.env.INJECT_FILE + ".js"],
          id: process.env.INJECT_FILE,
        },
      ])
      .catch(err => {
        console.log("Register Inject Scripts Failed", err);
      });
  } else {
    console.log("Register Inject Scripts By Tabs API");
    // /en-US/docs/Mozilla/Add-ons/WebExtensions/API/tabs/onUpdated
    cross.tabs.onUpdated.addListener((_, changeInfo, tab) => {
      if (changeInfo.status == "loading") {
        const tabId = tab && tab.id;
        const tabURL = tab && tab.url;
        if (tabURL && !URL_MATCH.some(match => new RegExp(match).test(tabURL))) {
          return void 0;
        }
        if (tabId && cross.scripting) {
          cross.scripting.executeScript({
            target: { tabId: tabId, allFrames: true },
            files: [process.env.INJECT_FILE + ".js"],
            injectImmediately: true,
          });
        }
      }
    });
  }
  // #ENDIF
  // #IFDEF GECKO
  console.log("Register Inject Scripts By Content Script Inline Code");
  // #ENDIF
};

Chrome has supported this since Chrome V109, and since Chrome 111 world: 'MAIN' scripts can be declared directly in the Manifest. But this compatibility still has to be handled by the developer: in particular, on browsers that do not support world: 'MAIN', the script would be treated as an ordinary Content Script, which I still find a bit tricky to deal with.
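On browsers new enough to support it, the declarative form looks roughly like the following manifest fragment (the file name is illustrative); on older Chromium versions the same registration still has to go through the scripting API fallback above.

```json
{
  "content_scripts": [
    {
      "matches": ["https://*/*", "http://*/*"],
      "run_at": "document_start",
      "world": "MAIN",
      "all_frames": true,
      "js": ["inject.js"]
    }
  ]
}
```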

Static resource handling

Consider that many of our resource references are handled as strings; for example, the icons references in the manifest are string references. Unlike a Web application, which references resources by their actual paths, in this case the resource will not be treated as a real dependency by the bundler, which means that when we modify the resource it will not trigger the bundler's HMR.

Therefore we need to manually register this part into the build dependencies, and additionally copy the relevant files into the build target folder. This is actually not a complicated task; we just need to implement a plugin that does it. Besides static resources such as images, the locales language files are handled the same way.

module.exports = class FilesPlugin {
  apply(compiler) {
    // Register the resources as context dependencies so edits trigger rebuilds
    // (hook names here follow the common webpack/rspack plugin pattern)
    compiler.hooks.afterCompile.tap("FilesPlugin", compilation => {
      const resources = path.resolve("public/static");
      !compilation.contextDependencies.has(resources) &&
        compilation.contextDependencies.add(resources);
    });

    // Copy the static files into the build output after each build
    compiler.hooks.done.tapPromise("FilesPlugin", () => {
      const locales = path.resolve("public/locales/");
      const resources = path.resolve("public/static/");

      const folder = isGecko ? "build-gecko" : "build";
      const localesTarget = path.resolve(`${folder}/_locales/`);
      const resourcesTarget = path.resolve(`${folder}/static/`);

      return Promise.all([
        exec(`cp -r ${locales} ${localesTarget}`),
        exec(`cp -r ${resources} ${resourcesTarget}`),
      ]);
    });
  }
};

Generate Manifest

The static resource issue above also exists for generating the manifest file, which we likewise need to register with the bundler as contextDependencies. Also, remember that we need to be compatible with both Chromium and Gecko when generating the manifest; we definitely don't want to maintain two configuration files for this, so we can generate it dynamically with ts-node, which lets us write the configuration through whatever logic we need.

module.exports = class ManifestPlugin {
  constructor() {
    this.manifest = path.resolve("src/manifest/");
  }

  apply(compiler) {
    compiler.hooks.afterCompile.tap("ManifestPlugin", compilation => {
      const manifest = this.manifest;
      !compilation.contextDependencies.has(manifest) &&
        compilation.contextDependencies.add(manifest);
    });

    compiler.hooks.done.tapPromise("ManifestPlugin", () => {
      // Drop the require cache so the regenerated manifest is re-read
      delete require.cache[require.resolve(this.manifest)];
      const manifest = require(this.manifest);
      const version = require(path.resolve("package.json")).version;
      manifest.version = version;
      const folder = isGecko ? "build-gecko" : "build";
      return writeFile(path.resolve(`${folder}/manifest.json`), JSON.stringify(manifest, null, 2));
    });
  }
};
const __URL_MATCH__ = ["https://*/*", "http://*/*", "file://*/*"];

// Chromium
const __MANIFEST__: Record<string, unknown> = {
  manifest_version: 3,
  name: "Force Copy",
  version: "0.0.0",
  description: "Force Copy Everything",
  default_locale: "en",
  icons: {
    32: "./static/favicon.",
    96: "./static/favicon.",
    128: "./static/favicon.",
  },
  // ...
  permissions: ["activeTab", "tabs", "scripting"],
  minimum_chrome_version: "88.0",
};

// Gecko
if (process.env.PLATFORM === "gecko") {
  __MANIFEST__.manifest_version = 2;
  // ...
  __MANIFEST__.permissions = ["activeTab", "tabs", ...__URL_MATCH__];
  __MANIFEST__.browser_specific_settings = {
    gecko: { strict_min_version: "91.1.0" },
    gecko_android: { strict_min_version: "91.1.0" },
  };

  delete __MANIFEST__.action;
  delete __MANIFEST__.host_permissions;
  delete __MANIFEST__.minimum_chrome_version;
  delete __MANIFEST__.web_accessible_resources;
}

module.exports = __MANIFEST__;

Event communication scheme

There are many modules in a browser extension; common ones are background/worker, popup, content, inject, devtools and so on. Different modules play different roles, and their collaboration makes up the extension's functionality. Obviously, with multiple modules each responsible for different features, we need communication capabilities between the related modules.

Since the whole program runs on TS, we'd prefer a communication scheme with complete types; especially when implementing complex functionality, static type checking helps us avoid many problems. Taking Popup-to-Content communication as an example of a unified data-communication scheme, in the extension we design a dedicated class for every pair of modules that need to communicate.

First we need to define the communication key values, because we use type to determine what kind of message each communication carries; and to prevent value conflicts, we use reduce to add some complexity to our key values.

const PC_REQUEST_TYPE = ["A", "B"] as const;
export const POPUP_TO_CONTENT_REQUEST = PC_REQUEST_TYPE.reduce(
  (acc, cur) => ({ ...acc, [cur]: `__${cur}__${MARK}__` }),
  {} as { [K in typeof PC_REQUEST_TYPE[number]]: `__${K}__${typeof MARK}__` }
);

If you have used redux, you may have run into the problem of how to keep type aligned with the types carried by payload. For example, when type is A we want TS to automatically infer that payload is of type { x: number }, and when type is B we want TS to automatically infer { y: string }. The simpler declarative scheme for this example is as follows.

type Tuple =
  | {
      type: "A";
      payload: { x: number };
    }
  | {
      type: "B";
      payload: { y: string };
    };
    
const pick = (data: Tuple) => {
  switch (data.type) {
    case "A":
      return data.payload.x; // number
    case "B":
      return data.payload.y; // string
  }
};

Writing it this way isn't really elegant, and we'd probably prefer tidier type declarations, which of course we can achieve with the help of generics. However, we may not be able to do this in one step: we first declare the type -> payload mapping as a Map type expressing the relationship, then convert it into a type -> { type: T; payload: Map[T] } structure, and then take its values to get the Tuple.

type Map = {
  A: { x: number };
  B: { y: string };
};

type ToReflectMap<T extends string, M extends Record<string, unknown>> = {
  [P in T]: { type: unknown extends M[P] ? never : P; payload: M[P] };
};

type ReflectMap = ToReflectMap<keyof Map, Map>;

type Tuple = ReflectMap[keyof ReflectMap];

We can then wrap this into a namespace, along with some basic type-conversion utilities, to make it easier to call.

export namespace Object {
  export type Keys<T extends Record<string, unknown>> = keyof T;

  export type Values<T extends Record<symbol | string | number, unknown>> = T[keyof T];
}

export namespace String {
  export type Map<T extends string> = { [P in T]: P };
}

export namespace EventReflect {
  export type Array<T, M extends Record<string, unknown>> = T extends string
    ? [type: unknown extends M[T] ? never : T, payload: M[T]]
    : never;

  export type Map<T extends string, M extends Record<string, unknown>> = {
    [P in T]: { type: unknown extends M[P] ? never : P; payload: M[P] };
  };

  export type Tuple<
    T extends Record<string, string>,
    M extends Record<string, unknown>
  > = Object.Values<Map<Object.Values<T>, M>>;
}

type Tuple = EventReflect.Tuple<String.Map<keyof Map>, Map>;

In fact, to make our function calls easier, we can also apply the same treatment to the parameters, mapping them to the desired tuple parameter type.

type Map = {
  A: { x: number };
  B: { y: string };
};

type Args = EventReflect.Array<keyof Map, Map>;

declare function post(...args: Args): null;

post("A", { x: 2 });
post("B", { y: "" });

To keep our type expression clear, we won't use the function-parameter form here for now and will stick to labeling request types in the object type -> payload form. Since we have defined the request types, we next need to define the types of the data returned in responses. To make it easy to express the returned data with strict typing, we likewise represent it in type -> payload form, and of course the response type here is consistent with the request type.

type EventMap = {
  [POPUP_TO_CONTENT_REQUEST.A]: { [K in PCQueryAType]: boolean };
};

export type PCResponseType = EventReflect.Tuple<String.Map<keyof EventMap>, EventMap>;

Next we define the Bridge for the whole event communication. Since here the Popup sends data toward the Content, we have to specify which Tab to send the data to, so we first need to query the currently active Tab. The data communication is then done by sending with tabs.sendMessage and receiving with runtime.onMessage. And because multiple communication channels may exist, we also need to determine the source of each message, which we do here simply by checking the key.

One thing to note here: even though sendResponse is defined in the extension docs as responding with data asynchronously, in actual testing I found this function cannot be called asynchronously; that is, it must be executed immediately in the response callback. The "asynchronous" in question refers to the entire event-communication process being asynchronous, so here we define the response in the form of returned data.

export class PCBridge {
  public static readonly REQUEST = POPUP_TO_CONTENT_REQUEST;

  static async postToContent(data: PCRequestType) {
    return new Promise<PCResponseType | null>(resolve => {
      cross.tabs
        .query({ active: true, currentWindow: true })
        .then(tabs => {
          const tab = tabs[0];
          const tabId = tab && tab.id;
          const tabURL = tab && tab.url;
          if (tabURL && !URL_MATCH.some(match => new RegExp(match).test(tabURL))) {
            resolve(null);
            return void 0;
          }
          if (!isEmptyValue(tabId)) {
            cross.tabs.sendMessage(tabId, data).then(resolve);
          } else {
            resolve(null);
          }
        })
        .catch(error => {
          console.log("Send Message Error", error);
        });
    });
  }

  static onPopupMessage(cb: (data: PCRequestType) => void | PCResponseType) {
    const handler = (
      request: PCRequestType,
      _: chrome.runtime.MessageSender,
      sendResponse: (response: PCResponseType | null) => void
    ) => {
      const response = cb(request);
      response && response.type === request.type && sendResponse(response);
    };
    cross.runtime.onMessage.addListener(handler);
    return () => {
      cross.runtime.onMessage.removeListener(handler);
    };
  }

  static isPCRequestType(data: PCRequestType): data is PCRequestType {
    return !!(data && data.type && data.type.endsWith(`__${MARK}__`));
  }
}

In addition, communication between the Content Script and the Inject Script requires relatively special handling. Since the Content Script shares the DOM and the event stream with the Inject Script, there are actually two ways we can implement the communication:

  • The first, commonly used method is `window.addEventListener` + `window.postMessage`. One obvious problem with this approach is that the web page itself can also receive our messages; even if we generate a randomized token to verify the source of each message, that is still not secure enough, because the token can easily be intercepted by the page itself.
  • The other way is `document.addEventListener` + `document.dispatchEvent` + `CustomEvent`. Here we need to make sure the event name is randomized: the injection framework generates a unique random event name in the background at injection time, after which the Content Script and the Inject Script communicate through that event name. This prevents the page from intercepting the messages exchanged during method invocations.

It is important to note that all transferred data must be serializable. If it is not, browsers with the Gecko kernel will treat the object as a cross-origin object, since the data does in fact cross different Contexts; anything else would be equivalent to sharing memory directly.
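Since only serializable data survives the channel, a quick runtime check can catch bad payloads early. This is a minimal sketch that uses a JSON round-trip as an approximation of the structured-clone serializability rules (an assumption; structured clone accepts somewhat more, e.g. `Date` and `Map`):

```typescript
// Rough serializability probe for message payloads (JSON approximation).
const isSerializable = (value: unknown): boolean => {
  try {
    const json: string | undefined = JSON.stringify(value);
    if (json === undefined) return false; // functions, symbols, undefined
    JSON.parse(json);
    return true;
  } catch {
    return false; // circular references, BigInt, ...
  }
};
```

For example, `isSerializable({ message: "call api" })` passes, while a function or a self-referencing object fails.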

// Content Script
document.addEventListener("xxxxxxxxxxxxx" + "content", e => {
    console.log("From Inject Script", e.detail);
});

// Inject Script
document.addEventListener("xxxxxxxxxxxxx" + "inject", e => {
    console.log("From Content Script", e.detail);
});

// Inject Script
document.dispatchEvent(
    new CustomEvent("xxxxxxxxxxxxx" + "content", {
        detail: { message: "call api" },
    }),
);

// Content Script
document.dispatchEvent(
    new CustomEvent("xxxxxxxxxxxxx" + "inject", {
        detail: { message: "return value" },
    }),
);

Hot Reload Solution

In the previous sections we kept mentioning that the v3 Google is strongly pushing brings a lot of limitations, and one of the big ones is its CSP (Content Security Policy), which no longer allows dynamic execution of code. That means solutions such as the DevServer's HMR no longer work properly. But hot reloading is a feature we genuinely need during development, so we have to settle for a not-so-perfect solution.

We can write a plugin for the packaging tool that starts a WebSocket server when the dev build starts. Then, in the Service Worker, we connect to that server via `new WebSocket` and listen for messages; when the `reload` message is received, we execute `runtime.reload()` to reload the extension immediately.

Then, on the side that started the WebSocket server, we send the client a `reload` message after every compilation, for example in the `afterDone` hook, which gives us a simple extension-reloading capability. In practice, however, this introduces another problem: in v3 the Service Worker is not persistent, so the WebSocket connection is destroyed along with the Service Worker. This is rather unfortunate, and it is also one of the reasons a large number of Chrome extensions cannot transition smoothly from v2 to v3, so there is a possibility this capability will be improved later.

const { WebSocketServer } = require("ws");

let wsClient = null;
module.exports = class ReloadPlugin {
  constructor() {
    if (isDev) {
      try {
        const server = new WebSocketServer({ port: 3333 });
        server.on("connection", client => {
          wsClient && wsClient.close();
          wsClient = client;
          console.log("Client Connected");
        });
      } catch (error) {
        console.log("Auto Reload Server Error", error);
      }
    }
  }
  apply(compiler) {
    compiler.hooks.afterDone.tap("ReloadPlugin", () => {
      wsClient && wsClient.send("reload-app");
    });
  }
};
export const onReceiveReloadMsg = () => {
  if (__DEV__) {
    try {
      const ws = new WebSocket("ws://localhost:3333");
      // Forward the reload message to the worker as soon as it is received
      ws.onmessage = () => {
        try {
          cross.runtime.sendMessage({ type: RELOAD_APP, payload: null });
        } catch (error) {
          console.log("SEND MESSAGE ERROR", error);
        }
      };
    } catch (error) {
      console.log("CONNECT ERROR", error);
    }
  }
};

export const onContentMessage = (data: CWRequestType, sender: chrome.runtime.MessageSender) => {
  console.log("Worker Receive Content Message", data);
  switch (data.type) {
    case RELOAD_APP: {
      reloadApp(RELOAD_APP);
      break;
    }
    // ...
  }
  return null;
};

Popup Multi-Language Support

One of the more interesting things is that the multi-language solution provided by the browser doesn't actually work very well. The files stored in `locales` are really just placeholders that allow the extension market to recognize which languages our extension supports, while the actual multi-language support is implemented by ourselves in the Popup. For example, the data in `packages/force-copy/public/locales/zh_CN` is as follows.

{
  "name": {
    "message": "Force Copy"
  }
}

There are in fact many front-end multi-language solutions. Since our extension doesn't have much multilingual content to worry about, being after all just a Popup layer, we keep it simple here; if a separate full page were needed, it would be necessary to use one of the community's i18n solutions.

First of all, the types must be complete. In our extension English is the base language, so the configuration is also defined with English as the base. And since we want a better grouping scheme, there may be deeply nested structures here, so the types must be complete enough to splice the nested keys together to support our multi-language lookup.

export const en = {
  Title: "Force Copy",
  Captain: {
    Modules: "Modules",
    Start: "Start",
    Once: "Once",
  },
  Operation: {
    Copy: "Copy",
    Keyboard: "Keyboard",
    ContextMenu: "ContextMenu",
  },
  Information: {
    GitHub: "GitHub",
    Help: "Help",
    Reload: "Reload",
  },
};
export type DefaultI18nConfig = typeof en;

export type ConfigBlock = {
  [key: string]: string | ConfigBlock;
};
type FlattenKeys<T extends ConfigBlock, Key = keyof T> = Key extends string
  ? T[Key] extends ConfigBlock
    ? `${Key}.${FlattenKeys<T[Key]>}`
    : `${Key}`
  : never;
export type I18nTypes = Record<FlattenKeys<DefaultI18nConfig>, string>;

Immediately after that we define the I18n class and a global cache of language configurations. The I18n class implements the translation function for call sites, on-demand generation of the flattened multi-language configuration, and configuration retrieval. At the call site we simply instantiate it and look up a key, e.g. `new I18n(cross.i18n.getUILanguage()).t("Title")`.

const cache: Record<string, I18nTypes> = {};

export class I18n {
  private config: I18nTypes;
  constructor(language: string) {
    this.config = I18n.getFullConfig(language);
  }

  t = (key: keyof I18nTypes, defaultValue = "") => {
    return this.config[key] || defaultValue || key;
  };

  private static getFullConfig = (key: string) => {
    if (cache[key]) return cache[key];
    let config;
    if (key.toLowerCase().startsWith("zh")) {
      config = I18n.generateFlattenConfig(zh);
    } else {
      config = I18n.generateFlattenConfig(en);
    }
    cache[key] = config;
    return config;
  };

  private static generateFlattenConfig = (config: ConfigBlock): I18nTypes => {
    const target: Record<string, string> = {};
    const dfs = (obj: ConfigBlock, prefix: string[]) => {
      for (const [key, value] of Object.entries(obj)) {
        if (isString(value)) {
          target[[...prefix, key].join(".")] = value;
        } else {
          dfs(value, [...prefix, key]);
        }
      }
    };
    dfs(config, []);
    return target as I18nTypes;
  };
}
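The flattening step performed by `generateFlattenConfig` can be shown with a tiny self-contained sketch; the sample config below is illustrative, mirroring the shape of the `en` config above:

```typescript
// Nested config objects become flat "a.b.c" keys via depth-first traversal.
type Block = { [key: string]: string | Block };

const flatten = (config: Block): Record<string, string> => {
  const target: Record<string, string> = {};
  const dfs = (obj: Block, prefix: string[]) => {
    for (const [key, value] of Object.entries(obj)) {
      if (typeof value === "string") {
        target[[...prefix, key].join(".")] = value;
      } else {
        dfs(value, [...prefix, key]);
      }
    }
  };
  dfs(config, []);
  return target;
};

const sample = { Title: "Force Copy", Captain: { Modules: "Modules", Start: "Start" } };
// flatten(sample) → { "Title": "Force Copy", "Captain.Modules": "Modules", "Captain.Start": "Start" }
```

These flattened keys are exactly what the `FlattenKeys` type computes at the type level, so `t("Captain.Start")` stays fully typed.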

Conclusion

Developing browser extensions is still a complicated matter, especially when they need to be compatible with both v2 and v3, since many designs must be checked for whether they will work properly on v3. With v3, browser extensions lose a lot of flexibility, but they also gain a certain amount of security. Browser extensions inherently have very high permissions; for example, even on v3 we can still use CDP (Chrome DevTools Protocol) in Chrome to accomplish a great deal. Extensions can do so much that if you don't understand one, or it is not open source, you may not dare to install it, because an extension's excessive permissions can cause very serious problems such as leakage of user information. Even an approach like Firefox's, which requires uploading the source code to strengthen auditing, can hardly eliminate all the pitfalls.