Merge branch 'main' into update-docs

pull/1046/head
Tanuj Kumar Mishra 2023-01-12 16:48:49 +05:30 committed by GitHub
commit 003f5452bf
27 changed files with 1609 additions and 570 deletions


@ -1,6 +1,6 @@
---
name: "@actions/cache"
version: 3.1.1
version: 3.1.2
type: npm
summary:
homepage:

README.md

@ -14,9 +14,10 @@ In addition to this `cache` action, two other actions are also available
See ["Caching dependencies to speed up workflows"](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows).
## What's New
### v3
* Added support for caching from GHES 3.5.
* Fixed download issue for files > 2GB during restore.
* Updated the minimum runner version support from node 12 -> node 16.
@ -28,39 +29,48 @@ See ["Caching dependencies to speed up workflows"](https://docs.github.com/en/ac
* Fixed zstd not working on Windows with GNU tar.
* Allowed users to provide a custom timeout for aborting the download of a cache segment via the environment variable `SEGMENT_DOWNLOAD_TIMEOUT_MINS`. Default is 60 minutes.
* Two new actions available for granular control over caches - [restore](restore/action.yml) and [save](save/action.yml)
* Support cross-os caching as an opt-in feature. See [Cross OS caching](./tips-and-workarounds.md#cross-os-cache) for more info.
Refer to [here](https://github.com/actions/cache/blob/v2/README.md) for previous versions.
## Usage
### Pre-requisites
Create a workflow `.yml` file in your repository's `.github/workflows` directory. An [example workflow](#example-workflow) is available below. For more information, reference the GitHub Help Documentation for [Creating a workflow file](https://help.github.com/en/articles/configuring-a-workflow#creating-a-workflow-file).
Create a workflow `.yml` file in your repository's `.github/workflows` directory. An [example workflow](#example-cache-workflow) is available below. For more information, reference the GitHub Help Documentation for [Creating a workflow file](https://help.github.com/en/articles/configuring-a-workflow#creating-a-workflow-file).
If you are using this inside a container, a POSIX-compliant `tar` needs to be included and accessible in the execution path.
If you are using a `self-hosted` Windows runner, `GNU tar` and `zstd` are required for [Cross-OS caching](https://github.com/actions/cache/blob/main/tips-and-workarounds.md#cross-os-cache) to work. Installing them is also recommended in general so that performance is on par with `hosted` Windows runners.
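For illustration, a minimal provisioning sketch for such a runner; the Chocolatey package name is an assumption, so verify what your package manager actually provides:
```yaml
# Hypothetical setup step for a self-hosted Windows runner.
# The package name 'zstandard' is an assumption; adjust for your package manager.
- name: Install zstd
  if: runner.os == 'Windows'
  run: choco install -y zstandard
  shell: pwsh
```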
### Inputs
* `path` - A list of files, directories, and wildcard patterns to cache and restore. See [`@actions/glob`](https://github.com/actions/toolkit/tree/main/packages/glob) for supported patterns.
* `key` - An explicit key for restoring and saving the cache
* `key` - An explicit key for restoring and saving the cache. Refer to [creating a cache key](#creating-a-cache-key).
* `restore-keys` - An ordered list of prefix-matched keys to use for restoring a stale cache if no cache hit occurred for the key.
* `enableCrossOsArchive` - An optional boolean. When enabled, it allows Windows runners to save or restore caches that can be restored or saved, respectively, on other platforms (see the sketch below). Default: `false`
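A minimal sketch of opting in to cross-OS caching (paths and key are illustrative):
```yaml
- uses: actions/cache@v3
  with:
    path: path/to/dependencies
    key: cross-os-cache-${{ hashFiles('**/lockfiles') }}
    enableCrossOsArchive: true # allow this cache to be restored on other platforms
```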
#### Environment Variables
* `SEGMENT_DOWNLOAD_TIMEOUT_MINS` - Segment download timeout (in minutes, default `60`) to abort download of the segment if not completed in the defined number of minutes. [Read more](https://github.com/actions/cache/blob/main/workarounds.md#cache-segment-restore-timeout)
* `SEGMENT_DOWNLOAD_TIMEOUT_MINS` - Segment download timeout (in minutes, default `60`) to abort download of the segment if not completed in the defined number of minutes. [Read more](https://github.com/actions/cache/blob/main/tips-and-workarounds.md#cache-segment-restore-timeout)
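For example, a minimal sketch of overriding this timeout on a cache step (the 10-minute value is illustrative):
```yaml
- uses: actions/cache@v3
  env:
    SEGMENT_DOWNLOAD_TIMEOUT_MINS: 10 # abort a segment download that takes longer than 10 minutes
  with:
    path: path/to/dependencies
    key: ${{ runner.os }}-cache
```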
### Outputs
* `cache-hit` - A boolean value to indicate an exact match was found for the key.
> Note: `cache-hit` will be set to `true` only when cache hit occurs for the exact `key` match. For a partial key match via `restore-keys` or a cache miss, it will be set to `false`.
See [Skipping steps based on cache-hit](#skipping-steps-based-on-cache-hit) for info on using this output.
### Cache scopes
The cache is scoped to the key, [version](#cache-version) and branch. The default branch cache is available to other branches.
See [Matching a cache key](https://help.github.com/en/actions/configuring-and-managing-workflows/caching-dependencies-to-speed-up-workflows#matching-a-cache-key) for more info.
### Example workflow
### Example cache workflow
#### Restoring and saving cache using a single action
```yaml
name: Caching Primes
@ -89,7 +99,49 @@ jobs:
run: /primes.sh -d prime-numbers
```
> Note: You must use the `cache` action in your workflow before you need to use the files that might be restored from the cache. If the provided `key` matches an existing cache, a new cache is not created, and if the provided `key` doesn't match an existing cache, a new cache is automatically created, provided the job completes successfully.
The `cache` action provides one output, `cache-hit`, which is set to `true` when the cache is restored using the primary `key` and `false` when the cache is restored using `restore-keys` or no cache is restored.
#### Using combination of restore and save actions
```yaml
name: Caching Primes

on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Restore cached Primes
        id: cache-primes-restore
        uses: actions/cache/restore@v3
        with:
          path: |
            path/to/dependencies
            some/other/dependencies
          key: ${{ runner.os }}-primes
      .
      . # intermediate workflow steps
      .
      - name: Save Primes
        id: cache-primes-save
        uses: actions/cache/save@v3
        with:
          path: |
            path/to/dependencies
            some/other/dependencies
          key: ${{ steps.cache-primes-restore.outputs.cache-primary-key }}
```
> **Note**
> You must use the `cache` or `restore` action in your workflow before you need to use the files that might be restored from the cache. If the provided `key` matches an existing cache, a new cache is not created, and if the provided `key` doesn't match an existing cache, a new cache is automatically created, provided the job completes successfully.
## Caching Strategies
With the introduction of the two new actions `restore` and `save`, many more caching use cases can now be achieved. Please refer to the [caching strategies](./caching-strategies.md) document to understand how you can use these actions strategically to achieve your desired goal.
## Implementation Examples
@ -97,30 +149,31 @@ Every programming language and framework has its own way of caching.
See [Examples](examples.md) for a list of `actions/cache` implementations for use with:
- [C# - NuGet](./examples.md#c---nuget)
- [Clojure - Lein Deps](./examples.md#clojure---lein-deps)
- [D - DUB](./examples.md#d---dub)
- [Deno](./examples.md#deno)
- [Elixir - Mix](./examples.md#elixir---mix)
- [Go - Modules](./examples.md#go---modules)
- [Haskell - Cabal](./examples.md#haskell---cabal)
- [Haskell - Stack](./examples.md#haskell---stack)
- [Java - Gradle](./examples.md#java---gradle)
- [Java - Maven](./examples.md#java---maven)
- [Node - npm](./examples.md#node---npm)
- [Node - Lerna](./examples.md#node---lerna)
- [Node - Yarn](./examples.md#node---yarn)
- [OCaml/Reason - esy](./examples.md#ocamlreason---esy)
- [PHP - Composer](./examples.md#php---composer)
- [Python - pip](./examples.md#python---pip)
- [Python - pipenv](./examples.md#python---pipenv)
- [R - renv](./examples.md#r---renv)
- [Ruby - Bundler](./examples.md#ruby---bundler)
- [Rust - Cargo](./examples.md#rust---cargo)
- [Scala - SBT](./examples.md#scala---sbt)
- [Swift, Objective-C - Carthage](./examples.md#swift-objective-c---carthage)
- [Swift, Objective-C - CocoaPods](./examples.md#swift-objective-c---cocoapods)
- [Swift - Swift Package Manager](./examples.md#swift---swift-package-manager)
* [C# - NuGet](./examples.md#c---nuget)
* [Clojure - Lein Deps](./examples.md#clojure---lein-deps)
* [D - DUB](./examples.md#d---dub)
* [Deno](./examples.md#deno)
* [Elixir - Mix](./examples.md#elixir---mix)
* [Go - Modules](./examples.md#go---modules)
* [Haskell - Cabal](./examples.md#haskell---cabal)
* [Haskell - Stack](./examples.md#haskell---stack)
* [Java - Gradle](./examples.md#java---gradle)
* [Java - Maven](./examples.md#java---maven)
* [Node - npm](./examples.md#node---npm)
* [Node - Lerna](./examples.md#node---lerna)
* [Node - Yarn](./examples.md#node---yarn)
* [OCaml/Reason - esy](./examples.md#ocamlreason---esy)
* [PHP - Composer](./examples.md#php---composer)
* [Python - pip](./examples.md#python---pip)
* [Python - pipenv](./examples.md#python---pipenv)
* [R - renv](./examples.md#r---renv)
* [Ruby - Bundler](./examples.md#ruby---bundler)
* [Rust - Cargo](./examples.md#rust---cargo)
* [Scala - SBT](./examples.md#scala---sbt)
* [Swift, Objective-C - Carthage](./examples.md#swift-objective-c---carthage)
* [Swift, Objective-C - CocoaPods](./examples.md#swift-objective-c---cocoapods)
* [Swift - Swift Package Manager](./examples.md#swift---swift-package-manager)
* [Swift - Mint](./examples.md#swift---mint)
## Creating a cache key
@ -164,6 +217,7 @@ A repository can have up to 10GB of caches. Once the 10GB limit is reached, olde
Using the `cache-hit` output, subsequent steps (such as install or build) can be skipped when a cache hit occurs on the key. It is recommended to install the missing/updated dependencies in case of a partial key match when the key is dependent on the `hash` of the package file.
Example:
```yaml
steps:
- uses: actions/checkout@v3
@ -181,11 +235,11 @@ steps:
> Note: The `id` defined in `actions/cache` must match the `id` in the `if` statement (i.e. `steps.[ID].outputs.cache-hit`)
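Since the diff elides the body of the example above, here is a minimal sketch of the pattern (step names, paths, and the key are illustrative):
```yaml
steps:
  - uses: actions/checkout@v3
  - uses: actions/cache@v3
    id: cache-primes # must match the id used in the `if` condition below
    with:
      path: prime-numbers
      key: ${{ runner.os }}-primes
  - name: Generate Prime Numbers
    if: steps.cache-primes.outputs.cache-hit != 'true'
    run: ./generate-primes.sh -d prime-numbers
```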
## Cache Version
Cache version is a hash [generated](https://github.com/actions/toolkit/blob/500d0b42fee2552ae9eeb5933091fe2fbf14e72d/packages/cache/src/internal/cacheHttpClient.ts#L73-L90) for a combination of the compression tool used (Gzip, Zstd, etc., based on the runner OS) and the `path` of the directories being cached. If two caches have different versions, they are identified as unique caches while matching. This, for example, means that a cache created on a `windows-latest` runner can't be restored on `ubuntu-latest`, as the cache `Version`s are different.
> Pro tip: The [List caches](https://docs.github.com/en/rest/actions/cache#list-github-actions-caches-for-a-repository) API can be used to get the version of a cache. This can be helpful for troubleshooting cache misses due to version.
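A minimal sketch of doing this from a workflow step, assuming the default `GITHUB_TOKEN` has read access to Actions (the GitHub CLI is preinstalled on hosted runners):
```yaml
- name: List caches and their versions
  run: gh api "/repos/${{ github.repository }}/actions/caches"
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```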
<details>
<summary>Example</summary>
@ -236,22 +290,27 @@ jobs:
if: steps.cache-primes.outputs.cache-hit != 'true'
run: ./generate-primes -d prime-numbers
```
</details>
## Known practices and workarounds
Following are some of the known practices/workarounds which the community has used to fulfill specific requirements. You may choose to use them if they suit your use case. Note that these are not necessarily the only or the recommended solutions.
- [Cache segment restore timeout](./tips-and-workarounds.md#cache-segment-restore-timeout)
- [Update a cache](./tips-and-workarounds.md#update-a-cache)
- [Use cache across feature branches](./tips-and-workarounds.md#use-cache-across-feature-branches)
- [Improving cache restore performance on Windows/Using cross-os caching](./tips-and-workarounds.md#improving-cache-restore-performance-on-windows-using-cross-os-caching)
- [Force deletion of caches overriding default cache eviction policy](./tips-and-workarounds.md#force-deletion-of-caches-overriding-default-cache-eviction-policy)
* [Cache segment restore timeout](./tips-and-workarounds.md#cache-segment-restore-timeout)
* [Update a cache](./tips-and-workarounds.md#update-a-cache)
* [Use cache across feature branches](./tips-and-workarounds.md#use-cache-across-feature-branches)
* [Cross OS cache](./tips-and-workarounds.md#cross-os-cache)
* [Force deletion of caches overriding default cache eviction policy](./tips-and-workarounds.md#force-deletion-of-caches-overriding-default-cache-eviction-policy)
### Windows environment variables
Please note that Windows environment variables (like `%LocalAppData%`) will NOT be expanded by this action. Instead, prefer using `~` in your paths, which will expand to the HOME directory. For example, instead of `%LocalAppData%`, use `~\AppData\Local`. For a list of supported default environment variables, see [this](https://docs.github.com/en/actions/learn-github-actions/environment-variables) page.
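A minimal sketch (the tool directory is hypothetical):
```yaml
- uses: actions/cache@v3
  with:
    path: ~\AppData\Local\example-tool # '~' is expanded by the action; %LocalAppData% would not be
    key: ${{ runner.os }}-example-tool-cache
```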
## Contributing
We would love for you to contribute to `actions/cache`; pull requests are welcome! Please see the [CONTRIBUTING.md](CONTRIBUTING.md) for more information.
## License
The scripts and documentation in this project are released under the [MIT License](LICENSE).


@ -62,4 +62,8 @@
- Added logs for cache version in case of a cache miss.
### 3.2.2
- Reverted the changes made in 3.2.1 to use gnu tar and zstd by default on windows.
### 3.2.3
- Support cross-OS caching on Windows as an opt-in feature.
- Fix issue with symlink restoration on Windows for cross-os caches.


@ -174,6 +174,26 @@ test("getInputAsInt throws if required and value missing", () => {
).toThrowError();
});
test("getInputAsBool returns false if input not set", () => {
expect(actionUtils.getInputAsBool("undefined")).toBe(false);
});
test("getInputAsBool returns value if input is valid", () => {
testUtils.setInput("foo", "true");
expect(actionUtils.getInputAsBool("foo")).toBe(true);
});
test("getInputAsBool returns false if input is invalid or NaN", () => {
testUtils.setInput("foo", "bar");
expect(actionUtils.getInputAsBool("foo")).toBe(false);
});
test("getInputAsBool throws if required and value missing", () => {
expect(() =>
actionUtils.getInputAsBool("undefined2", { required: true })
).toThrowError();
});
test("isCacheFeatureAvailable for ac enabled", () => {
jest.spyOn(cache, "isFeatureAvailable").mockImplementation(() => true);


@ -27,9 +27,17 @@ beforeAll(() => {
return actualUtils.getInputAsArray(name, options);
}
);
jest.spyOn(actionUtils, "getInputAsBool").mockImplementation(
(name, options) => {
const actualUtils = jest.requireActual("../src/utils/actionUtils");
return actualUtils.getInputAsBool(name, options);
}
);
});
beforeEach(() => {
jest.restoreAllMocks();
process.env[Events.Key] = Events.Push;
process.env[RefKey] = "refs/heads/feature-branch";
@ -50,7 +58,8 @@ test("restore with no cache found", async () => {
const key = "node-test";
testUtils.setInputs({
path: path,
key
key,
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -65,7 +74,7 @@ test("restore with no cache found", async () => {
await run();
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, []);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [], {}, false);
expect(stateMock).toHaveBeenCalledWith("CACHE_KEY", key);
expect(stateMock).toHaveBeenCalledTimes(1);
@ -84,7 +93,8 @@ test("restore with restore keys and no cache found", async () => {
testUtils.setInputs({
path: path,
key,
restoreKeys: [restoreKey]
restoreKeys: [restoreKey],
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -99,7 +109,13 @@ test("restore with restore keys and no cache found", async () => {
await run();
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [restoreKey]);
expect(restoreCacheMock).toHaveBeenCalledWith(
[path],
key,
[restoreKey],
{},
false
);
expect(stateMock).toHaveBeenCalledWith("CACHE_KEY", key);
expect(stateMock).toHaveBeenCalledTimes(1);
@ -116,7 +132,8 @@ test("restore with cache found for key", async () => {
const key = "node-test";
testUtils.setInputs({
path: path,
key
key,
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -132,7 +149,7 @@ test("restore with cache found for key", async () => {
await run();
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, []);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [], {}, false);
expect(stateMock).toHaveBeenCalledWith("CACHE_KEY", key);
expect(stateMock).toHaveBeenCalledWith("CACHE_RESULT", key);
@ -152,7 +169,8 @@ test("restore with cache found for restore key", async () => {
testUtils.setInputs({
path: path,
key,
restoreKeys: [restoreKey]
restoreKeys: [restoreKey],
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -168,7 +186,13 @@ test("restore with cache found for restore key", async () => {
await run();
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [restoreKey]);
expect(restoreCacheMock).toHaveBeenCalledWith(
[path],
key,
[restoreKey],
{},
false
);
expect(stateMock).toHaveBeenCalledWith("CACHE_KEY", key);
expect(stateMock).toHaveBeenCalledWith("CACHE_RESULT", restoreKey);


@ -28,9 +28,17 @@ beforeAll(() => {
return actualUtils.getInputAsArray(name, options);
}
);
jest.spyOn(actionUtils, "getInputAsBool").mockImplementation(
(name, options) => {
const actualUtils = jest.requireActual("../src/utils/actionUtils");
return actualUtils.getInputAsBool(name, options);
}
);
});
beforeEach(() => {
jest.restoreAllMocks();
process.env[Events.Key] = Events.Push;
process.env[RefKey] = "refs/heads/feature-branch";
@ -97,7 +105,8 @@ test("restore on GHES with AC available ", async () => {
const key = "node-test";
testUtils.setInputs({
path: path,
key
key,
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -113,7 +122,7 @@ test("restore on GHES with AC available ", async () => {
await run(new StateProvider());
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, []);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [], {}, false);
expect(stateMock).toHaveBeenCalledWith("CACHE_KEY", key);
expect(setCacheHitOutputMock).toHaveBeenCalledTimes(1);
@ -152,13 +161,20 @@ test("restore with too many keys should fail", async () => {
testUtils.setInputs({
path: path,
key,
restoreKeys
restoreKeys,
enableCrossOsArchive: false
});
const failedMock = jest.spyOn(core, "setFailed");
const restoreCacheMock = jest.spyOn(cache, "restoreCache");
await run(new StateProvider());
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, restoreKeys);
expect(restoreCacheMock).toHaveBeenCalledWith(
[path],
key,
restoreKeys,
{},
false
);
expect(failedMock).toHaveBeenCalledWith(
`Key Validation Error: Keys are limited to a maximum of 10.`
);
@ -169,13 +185,14 @@ test("restore with large key should fail", async () => {
const key = "foo".repeat(512); // Over the 512 character limit
testUtils.setInputs({
path: path,
key
key,
enableCrossOsArchive: false
});
const failedMock = jest.spyOn(core, "setFailed");
const restoreCacheMock = jest.spyOn(cache, "restoreCache");
await run(new StateProvider());
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, []);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [], {}, false);
expect(failedMock).toHaveBeenCalledWith(
`Key Validation Error: ${key} cannot be larger than 512 characters.`
);
@ -186,13 +203,14 @@ test("restore with invalid key should fail", async () => {
const key = "comma,comma";
testUtils.setInputs({
path: path,
key
key,
enableCrossOsArchive: false
});
const failedMock = jest.spyOn(core, "setFailed");
const restoreCacheMock = jest.spyOn(cache, "restoreCache");
await run(new StateProvider());
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, []);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [], {}, false);
expect(failedMock).toHaveBeenCalledWith(
`Key Validation Error: ${key} cannot contain commas.`
);
@ -203,7 +221,8 @@ test("restore with no cache found", async () => {
const key = "node-test";
testUtils.setInputs({
path: path,
key
key,
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -218,7 +237,7 @@ test("restore with no cache found", async () => {
await run(new StateProvider());
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, []);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [], {}, false);
expect(stateMock).toHaveBeenCalledWith("CACHE_KEY", key);
expect(failedMock).toHaveBeenCalledTimes(0);
@ -235,7 +254,8 @@ test("restore with restore keys and no cache found", async () => {
testUtils.setInputs({
path: path,
key,
restoreKeys: [restoreKey]
restoreKeys: [restoreKey],
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -250,7 +270,13 @@ test("restore with restore keys and no cache found", async () => {
await run(new StateProvider());
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [restoreKey]);
expect(restoreCacheMock).toHaveBeenCalledWith(
[path],
key,
[restoreKey],
{},
false
);
expect(stateMock).toHaveBeenCalledWith("CACHE_KEY", key);
expect(failedMock).toHaveBeenCalledTimes(0);
@ -265,7 +291,8 @@ test("restore with cache found for key", async () => {
const key = "node-test";
testUtils.setInputs({
path: path,
key
key,
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -281,7 +308,7 @@ test("restore with cache found for key", async () => {
await run(new StateProvider());
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, []);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [], {}, false);
expect(stateMock).toHaveBeenCalledWith("CACHE_KEY", key);
expect(setCacheHitOutputMock).toHaveBeenCalledTimes(1);
@ -298,7 +325,8 @@ test("restore with cache found for restore key", async () => {
testUtils.setInputs({
path: path,
key,
restoreKeys: [restoreKey]
restoreKeys: [restoreKey],
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -314,7 +342,13 @@ test("restore with cache found for restore key", async () => {
await run(new StateProvider());
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [restoreKey]);
expect(restoreCacheMock).toHaveBeenCalledWith(
[path],
key,
[restoreKey],
{},
false
);
expect(stateMock).toHaveBeenCalledWith("CACHE_KEY", key);
expect(setCacheHitOutputMock).toHaveBeenCalledTimes(1);


@ -27,9 +27,18 @@ beforeAll(() => {
return actualUtils.getInputAsArray(name, options);
}
);
jest.spyOn(actionUtils, "getInputAsBool").mockImplementation(
(name, options) => {
return jest
.requireActual("../src/utils/actionUtils")
.getInputAsBool(name, options);
}
);
});
beforeEach(() => {
jest.restoreAllMocks();
process.env[Events.Key] = Events.Push;
process.env[RefKey] = "refs/heads/feature-branch";
@ -50,7 +59,8 @@ test("restore with no cache found", async () => {
const key = "node-test";
testUtils.setInputs({
path: path,
key
key,
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -65,7 +75,7 @@ test("restore with no cache found", async () => {
await run();
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, []);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [], {}, false);
expect(outputMock).toHaveBeenCalledWith("cache-primary-key", key);
expect(outputMock).toHaveBeenCalledTimes(1);
@ -83,7 +93,8 @@ test("restore with restore keys and no cache found", async () => {
testUtils.setInputs({
path: path,
key,
restoreKeys: [restoreKey]
restoreKeys: [restoreKey],
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -98,7 +109,13 @@ test("restore with restore keys and no cache found", async () => {
await run();
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [restoreKey]);
expect(restoreCacheMock).toHaveBeenCalledWith(
[path],
key,
[restoreKey],
{},
false
);
expect(outputMock).toHaveBeenCalledWith("cache-primary-key", key);
expect(failedMock).toHaveBeenCalledTimes(0);
@ -113,7 +130,8 @@ test("restore with cache found for key", async () => {
const key = "node-test";
testUtils.setInputs({
path: path,
key
key,
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -128,7 +146,7 @@ test("restore with cache found for key", async () => {
await run();
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, []);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [], {}, false);
expect(outputMock).toHaveBeenCalledWith("cache-primary-key", key);
expect(outputMock).toHaveBeenCalledWith("cache-hit", "true");
@ -147,7 +165,8 @@ test("restore with cache found for restore key", async () => {
testUtils.setInputs({
path: path,
key,
restoreKeys: [restoreKey]
restoreKeys: [restoreKey],
enableCrossOsArchive: false
});
const infoMock = jest.spyOn(core, "info");
@ -162,7 +181,13 @@ test("restore with cache found for restore key", async () => {
await run();
expect(restoreCacheMock).toHaveBeenCalledTimes(1);
expect(restoreCacheMock).toHaveBeenCalledWith([path], key, [restoreKey]);
expect(restoreCacheMock).toHaveBeenCalledWith(
[path],
key,
[restoreKey],
{},
false
);
expect(outputMock).toHaveBeenCalledWith("cache-primary-key", key);
expect(outputMock).toHaveBeenCalledWith("cache-hit", "false");


@ -35,6 +35,14 @@ beforeAll(() => {
}
);
jest.spyOn(actionUtils, "getInputAsBool").mockImplementation(
(name, options) => {
return jest
.requireActual("../src/utils/actionUtils")
.getInputAsBool(name, options);
}
);
jest.spyOn(actionUtils, "isExactKeyMatch").mockImplementation(
(key, cacheResult) => {
return jest
@ -95,9 +103,14 @@ test("save with valid inputs uploads a cache", async () => {
await run();
expect(saveCacheMock).toHaveBeenCalledTimes(1);
expect(saveCacheMock).toHaveBeenCalledWith([inputPath], primaryKey, {
uploadChunkSize: 4000000
});
expect(saveCacheMock).toHaveBeenCalledWith(
[inputPath],
primaryKey,
{
uploadChunkSize: 4000000
},
false
);
expect(failedMock).toHaveBeenCalledTimes(0);
});


@ -32,6 +32,14 @@ beforeAll(() => {
}
);
jest.spyOn(actionUtils, "getInputAsBool").mockImplementation(
(name, options) => {
return jest
.requireActual("../src/utils/actionUtils")
.getInputAsBool(name, options);
}
);
jest.spyOn(actionUtils, "isExactKeyMatch").mockImplementation(
(key, cacheResult) => {
return jest
@ -47,6 +55,7 @@ beforeAll(() => {
});
beforeEach(() => {
jest.restoreAllMocks();
process.env[Events.Key] = Events.Push;
process.env[RefKey] = "refs/heads/feature-branch";
@ -155,9 +164,14 @@ test("save on GHES with AC available", async () => {
await run(new StateProvider());
expect(saveCacheMock).toHaveBeenCalledTimes(1);
expect(saveCacheMock).toHaveBeenCalledWith([inputPath], primaryKey, {
uploadChunkSize: 4000000
});
expect(saveCacheMock).toHaveBeenCalledWith(
[inputPath],
primaryKey,
{
uploadChunkSize: 4000000
},
false
);
expect(failedMock).toHaveBeenCalledTimes(0);
});
@ -251,7 +265,8 @@ test("save with large cache outputs warning", async () => {
expect(saveCacheMock).toHaveBeenCalledWith(
[inputPath],
primaryKey,
expect.anything()
expect.anything(),
false
);
expect(logWarningMock).toHaveBeenCalledTimes(1);
@ -297,7 +312,8 @@ test("save with reserve cache failure outputs warning", async () => {
expect(saveCacheMock).toHaveBeenCalledWith(
[inputPath],
primaryKey,
expect.anything()
expect.anything(),
false
);
expect(logWarningMock).toHaveBeenCalledWith(
@ -339,7 +355,8 @@ test("save with server error outputs warning", async () => {
expect(saveCacheMock).toHaveBeenCalledWith(
[inputPath],
primaryKey,
expect.anything()
expect.anything(),
false
);
expect(logWarningMock).toHaveBeenCalledTimes(1);
@ -378,9 +395,14 @@ test("save with valid inputs uploads a cache", async () => {
await run(new StateProvider());
expect(saveCacheMock).toHaveBeenCalledTimes(1);
expect(saveCacheMock).toHaveBeenCalledWith([inputPath], primaryKey, {
uploadChunkSize: 4000000
});
expect(saveCacheMock).toHaveBeenCalledWith(
[inputPath],
primaryKey,
{
uploadChunkSize: 4000000
},
false
);
expect(failedMock).toHaveBeenCalledTimes(0);
});


@ -35,6 +35,14 @@ beforeAll(() => {
}
);
jest.spyOn(actionUtils, "getInputAsBool").mockImplementation(
(name, options) => {
return jest
.requireActual("../src/utils/actionUtils")
.getInputAsBool(name, options);
}
);
jest.spyOn(actionUtils, "isExactKeyMatch").mockImplementation(
(key, cacheResult) => {
return jest
@ -85,9 +93,14 @@ test("save with valid inputs uploads a cache", async () => {
await run();
expect(saveCacheMock).toHaveBeenCalledTimes(1);
expect(saveCacheMock).toHaveBeenCalledWith([inputPath], primaryKey, {
uploadChunkSize: 4000000
});
expect(saveCacheMock).toHaveBeenCalledWith(
[inputPath],
primaryKey,
{
uploadChunkSize: 4000000
},
false
);
expect(failedMock).toHaveBeenCalledTimes(0);
});
@ -112,9 +125,14 @@ test("save failing logs the warning message", async () => {
await run();
expect(saveCacheMock).toHaveBeenCalledTimes(1);
expect(saveCacheMock).toHaveBeenCalledWith([inputPath], primaryKey, {
uploadChunkSize: 4000000
});
expect(saveCacheMock).toHaveBeenCalledWith(
[inputPath],
primaryKey,
{
uploadChunkSize: 4000000
},
false
);
expect(warningMock).toHaveBeenCalledTimes(1);
expect(warningMock).toHaveBeenCalledWith("Cache save failed.");


@ -14,6 +14,10 @@ inputs:
upload-chunk-size:
description: 'The chunk size used to split up large files during upload, in bytes'
required: false
enableCrossOsArchive:
description: 'An optional boolean. When enabled, it allows Windows runners to save or restore caches that can be restored or saved, respectively, on other platforms'
default: 'false'
required: false
outputs:
cache-hit:
description: 'A boolean value to indicate an exact match was found for the primary key'

caching-strategies.md (new file)

@ -0,0 +1,297 @@
# Caching Strategies
This document lists some of the strategies (and example workflows where possible) that can be used
- to use an effective cache key and/or path
- to solve some common use cases around saving and restoring caches
- to leverage the step inputs and outputs more effectively
## Choosing the right key
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/cache@v3
        with:
          key: ${{ some-metadata }}-cache
```
In your workflows, you can use different strategies to name your key depending on your use case, so that the cache is scoped appropriately for your needs. For example, you can have a cache specific to the OS, or one based on the lockfile, commit SHA, or even the workflow run.
### Updating cache for any change in the dependencies
One of the most common use cases is to use the hash of the lockfile as the key. This way, the same cache will be restored for a lockfile until there's a change in the dependencies listed in the lockfile.
```yaml
- uses: actions/cache@v3
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: cache-${{ hashFiles('**/lockfiles') }}
```
### Using restore keys to download the closest matching cache
If no cache is found matching the primary key, the restore keys can be used to download the closest matching cache that was recently created. This ensures that the build/install step only needs to fetch a handful of newer dependencies, saving build time.
```yaml
- uses: actions/cache@v3
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: cache-npm-${{ hashFiles('**/lockfiles') }}
    restore-keys: |
      cache-npm-
```
The restore keys can be provided as a complete name or a prefix. Read more [here](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#matching-a-cache-key) about how a cache key is matched using restore keys.
### Separate caches by Operating System
For workflows with a matrix running on multiple operating systems, the caches can be stored separately for each of them. This can be used in combination with `hashFiles` in case multiple caches are being generated per OS.
```yaml
- uses: actions/cache@v3
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: ${{ runner.os }}-cache
```
### Creating a short lived cache
Caches scoped to a particular workflow run id or run attempt can be stored and referred to by using the run id/attempt. This is an effective way to create a short-lived cache.
```yaml
key: cache-${{ github.run_id }}-${{ github.run_attempt }}
```
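Expanded into a full step, this might look like (the path is illustrative):
```yaml
- uses: actions/cache@v3
  with:
    path: path/to/dependencies
    key: cache-${{ github.run_id }}-${{ github.run_attempt }} # unique per run attempt
```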
Along similar lines, the commit SHA can be used to create a very specialized and short-lived cache.
```yaml
- uses: actions/cache@v3
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: cache-${{ github.sha }}
```
### Using multiple factors while forming a key depending on the need
A cache key can be formed from a combination of multiple pieces of metadata and evaluated information.
```yaml
- uses: actions/cache@v3
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
```
The [GitHub Context](https://docs.github.com/en/actions/learn-github-actions/contexts#github-context) can be used to create keys using the workflow's metadata.
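For instance, a key built from workflow metadata might combine the workflow and branch names (the combination is illustrative):
```yaml
- uses: actions/cache@v3
  with:
    path: path/to/dependencies
    key: ${{ github.workflow }}-${{ github.ref_name }}-cache # scoped per workflow and branch
```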
## Restoring Cache
### Understanding how to choose path
While setting paths for caching dependencies, it is important to give the correct path depending on the hosted runner you are using and whether the action is running in a container job. Assigning a different `path` for save and restore will result in a cache miss.
Below are the GitHub-hosted runner specific paths one should take care of when writing a workflow which saves/restores caches across OSes.
#### Ubuntu Paths
Home directory (`~/`) = `/home/runner`
`${{ github.workspace }}` = `/home/runner/work/repo/repo`
`process.env['RUNNER_TEMP']`=`/home/runner/work/_temp`
`process.cwd()` = `/home/runner/work/repo/repo`
#### Windows Paths
Home directory (`~/`) = `C:\Users\runneradmin`
`${{ github.workspace }}` = `D:\a\repo\repo`
`process.env['RUNNER_TEMP']` = `D:\a\_temp`
`process.cwd()` = `D:\a\repo\repo`
#### macOS Paths
Home directory (`~/`) = `/Users/runner`
`${{ github.workspace }}` = `/Users/runner/work/repo/repo`
`process.env['RUNNER_TEMP']` = `/Users/runner/work/_temp`
`process.cwd()` = `/Users/runner/work/repo/repo`
Where:
`cwd()` = Current working directory where the repository code resides.
`RUNNER_TEMP` = Environment variable defined for temporary storage location.
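A minimal cross-OS friendly sketch that keeps `path` identical for save and restore by anchoring on the home directory (the directory name is hypothetical):
```yaml
- uses: actions/cache@v3
  with:
    path: ~/.cache/example-dependencies # resolves relative to HOME on every OS
    key: cross-os-cache-${{ hashFiles('**/lockfiles') }}
    enableCrossOsArchive: true
```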
### Make cache read only / Reuse cache from centralized job
If you are using a centralized job to create and save your cache so that it can be reused by other jobs in your repository, the `restore` action will take care of your restore-only needs, effectively keeping the cache read-only.
```yaml
steps:
  - uses: actions/checkout@v3

  - uses: actions/cache/restore@v3
    id: cache
    with:
      path: path/to/dependencies
      key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}

  - name: Install Dependencies
    if: steps.cache.outputs.cache-hit != 'true'
    run: /install.sh

  - name: Build
    run: /build.sh

  - name: Publish package to public
    run: /publish.sh
```
### Failing/Exiting the workflow if cache with exact key is not found
You can use the output of this action to exit the workflow on a cache miss. This way you can restrict your workflow to only initiate the build when a `cache-hit` occurs, in other words, when a cache with the exact key is found.
```yaml
steps:
  - uses: actions/checkout@v3

  - uses: actions/cache/restore@v3
    id: cache
    with:
      path: path/to/dependencies
      key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}

  - name: Check cache hit
    if: steps.cache.outputs.cache-hit != 'true'
    run: exit 1

  - name: Build
    run: /build.sh
```
## Saving cache
### Reusing primary key from restore cache as input to save action
If you want to avoid re-computing the cache key in the `save` action, the outputs from the `restore` action can be used as input to the `save` action.
```yaml
- uses: actions/cache/restore@v3
  id: restore-cache
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
.
.
.
- uses: actions/cache/save@v3
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: ${{ steps.restore-cache.outputs.cache-primary-key }}
```
### Re-evaluate cache key while saving cache
On the other hand, the key can also be explicitly re-computed while executing the save action. This helps in cases where the lockfiles are generated during the build.
Let's say we have a restore step that computes the key at runtime:
```yaml
uses: actions/cache/restore@v3
id: restore-cache
with:
  key: cache-${{ hashFiles('**/lockfiles') }}
```
Case 1: Where a user wants to reuse the key as-is
```yaml
uses: actions/cache/save@v3
with:
  key: ${{ steps.restore-cache.outputs.cache-primary-key }}
```
Case 2: Where the user wants to re-evaluate the key
```yaml
uses: actions/cache/save@v3
with:
  key: npm-cache-${{ hashFiles('package-lock.json') }}
```
### Saving cache even if the build fails
There can be cases where a cache should be saved even if the build job fails. For example, a job can fail due to flaky tests but the caches can still be re-used. You can use the `actions/cache/save` action to save the cache by using the `if: always()` condition.
Similarly, the `actions/cache/save` action can be used conditionally based on the output of previous steps. This way you get more control over when to save the cache.
```yaml
steps:
  - uses: actions/checkout@v3
  .
  . # restore if need be
  .
  - name: Build
    run: /build.sh
  - uses: actions/cache/save@v3
    if: always() # or any other condition to invoke the save action
    with:
      path: path/to/dependencies
      key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
```
### Saving cache once and reusing in multiple workflows
In the case of multi-module projects, where the built artifact of one project needs to be reused in subsequent child modules, the need to rebuild the parent module with every build can be eliminated. The `actions/cache` or `actions/cache/save` action can be used to build and save the parent module artifact once, and it can be restored multiple times while building the child modules.
#### Step 1 - Build the parent module and save it
```yaml
steps:
  - uses: actions/checkout@v3
  - name: Build
    run: ./build-parent-module.sh
  - uses: actions/cache/save@v3
    id: cache
    with:
      path: path/to/dependencies
      key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
```
#### Step 2 - Restore the built artifact from cache using the same key and path
```yaml
steps:
  - uses: actions/checkout@v3
  - uses: actions/cache/restore@v3
    id: cache
    with:
      path: path/to/dependencies
      key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
  - name: Install Dependencies
    if: steps.cache.outputs.cache-hit != 'true'
    run: ./install.sh
  - name: Build
    run: ./build-child-module.sh
  - name: Publish package to public
    run: ./publish.sh
```


@ -1177,10 +1177,6 @@ function getVersion(app) {
// Use zstandard if possible to maximize cache performance
function getCompressionMethod() {
return __awaiter(this, void 0, void 0, function* () {
if (process.platform === 'win32' && !(yield isGnuTarInstalled())) {
// Disable zstd due to bug https://github.com/actions/cache/issues/301
return constants_1.CompressionMethod.Gzip;
}
const versionOutput = yield getVersion('zstd');
const version = semver.clean(versionOutput);
if (!versionOutput.toLowerCase().includes('zstd command line interface')) {
@ -1204,13 +1200,16 @@ function getCacheFileName(compressionMethod) {
: constants_1.CacheFilename.Zstd;
}
exports.getCacheFileName = getCacheFileName;
function isGnuTarInstalled() {
function getGnuTarPathOnWindows() {
return __awaiter(this, void 0, void 0, function* () {
if (fs.existsSync(constants_1.GnuTarPathOnWindows)) {
return constants_1.GnuTarPathOnWindows;
}
const versionOutput = yield getVersion('tar');
return versionOutput.toLowerCase().includes('gnu tar');
return versionOutput.toLowerCase().includes('gnu tar') ? io.which('tar') : '';
});
}
exports.isGnuTarInstalled = isGnuTarInstalled;
exports.getGnuTarPathOnWindows = getGnuTarPathOnWindows;
function assertDefined(name, value) {
if (value === undefined) {
throw Error(`Expected ${name} but value was undefined`);
@ -3384,7 +3383,6 @@ const crypto = __importStar(__webpack_require__(417));
const fs = __importStar(__webpack_require__(747));
const url_1 = __webpack_require__(414);
const utils = __importStar(__webpack_require__(15));
const constants_1 = __webpack_require__(931);
const downloadUtils_1 = __webpack_require__(251);
const options_1 = __webpack_require__(538);
const requestUtils_1 = __webpack_require__(899);
@ -3414,10 +3412,17 @@ function createHttpClient() {
const bearerCredentialHandler = new auth_1.BearerCredentialHandler(token);
return new http_client_1.HttpClient('actions/cache', [bearerCredentialHandler], getRequestOptions());
}
function getCacheVersion(paths, compressionMethod) {
const components = paths.concat(!compressionMethod || compressionMethod === constants_1.CompressionMethod.Gzip
? []
: [compressionMethod]);
function getCacheVersion(paths, compressionMethod, enableCrossOsArchive = false) {
const components = paths;
// Add compression method to cache version to restore
// compressed cache as per compression method
if (compressionMethod) {
components.push(compressionMethod);
}
// Add a Windows-only marker unless enableCrossOsArchive is set, so Windows caches stay platform-scoped
if (process.platform === 'win32' && !enableCrossOsArchive) {
components.push('windows-only');
}
// Add salt to cache version to support breaking changes in cache entry
components.push(versionSalt);
return crypto
@ -3429,9 +3434,10 @@ exports.getCacheVersion = getCacheVersion;
function getCacheEntry(keys, paths, options) {
return __awaiter(this, void 0, void 0, function* () {
const httpClient = createHttpClient();
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod);
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod, options === null || options === void 0 ? void 0 : options.enableCrossOsArchive);
const resource = `cache?keys=${encodeURIComponent(keys.join(','))}&version=${version}`;
const response = yield requestUtils_1.retryTypedResponse('getCacheEntry', () => __awaiter(this, void 0, void 0, function* () { return httpClient.getJson(getCacheApiUrl(resource)); }));
// Cache not found
if (response.statusCode === 204) {
// List cache for primary key only if cache miss occurs
if (core.isDebug()) {
@ -3445,6 +3451,7 @@ function getCacheEntry(keys, paths, options) {
const cacheResult = response.result;
const cacheDownloadUrl = cacheResult === null || cacheResult === void 0 ? void 0 : cacheResult.archiveLocation;
if (!cacheDownloadUrl) {
// Cache archiveLocation not found. This should never happen, and hence bail out.
throw new Error('Cache not found.');
}
core.setSecret(cacheDownloadUrl);
@ -3490,7 +3497,7 @@ exports.downloadCache = downloadCache;
function reserveCache(key, paths, options) {
return __awaiter(this, void 0, void 0, function* () {
const httpClient = createHttpClient();
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod);
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod, options === null || options === void 0 ? void 0 : options.enableCrossOsArchive);
const reserveCacheRequest = {
key,
version,
@ -4970,7 +4977,8 @@ var Inputs;
Inputs["Key"] = "key";
Inputs["Path"] = "path";
Inputs["RestoreKeys"] = "restore-keys";
Inputs["UploadChunkSize"] = "upload-chunk-size"; // Input for cache, save action
Inputs["UploadChunkSize"] = "upload-chunk-size";
Inputs["EnableCrossOsArchive"] = "enableCrossOsArchive"; // Input for cache, restore, save action
})(Inputs = exports.Inputs || (exports.Inputs = {}));
var Outputs;
(function (Outputs) {
@ -10066,7 +10074,7 @@ var __importStar = (this && this.__importStar) || function (mod) {
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.isCacheFeatureAvailable = exports.getInputAsInt = exports.getInputAsArray = exports.isValidEvent = exports.logWarning = exports.isExactKeyMatch = exports.isGhes = void 0;
exports.isCacheFeatureAvailable = exports.getInputAsBool = exports.getInputAsInt = exports.getInputAsArray = exports.isValidEvent = exports.logWarning = exports.isExactKeyMatch = exports.isGhes = void 0;
const cache = __importStar(__webpack_require__(692));
const core = __importStar(__webpack_require__(470));
const constants_1 = __webpack_require__(196);
@ -10109,6 +10117,11 @@ function getInputAsInt(name, options) {
return value;
}
exports.getInputAsInt = getInputAsInt;
function getInputAsBool(name, options) {
const result = core.getInput(name, options);
return result.toLowerCase() === "true";
}
exports.getInputAsBool = getInputAsBool;
function isCacheFeatureAvailable() {
if (cache.isFeatureAvailable()) {
return true;
@ -38216,27 +38229,27 @@ var __importStar = (this && this.__importStar) || function (mod) {
};
Object.defineProperty(exports, "__esModule", { value: true });
const exec_1 = __webpack_require__(986);
const core_1 = __webpack_require__(470);
const io = __importStar(__webpack_require__(1));
const fs_1 = __webpack_require__(747);
const path = __importStar(__webpack_require__(622));
const utils = __importStar(__webpack_require__(15));
const constants_1 = __webpack_require__(931);
const IS_WINDOWS = process.platform === 'win32';
function getTarPath(args, compressionMethod) {
core_1.exportVariable('MSYS', 'winsymlinks:nativestrict');
// Returns tar path and type: BSD or GNU
function getTarPath() {
return __awaiter(this, void 0, void 0, function* () {
switch (process.platform) {
case 'win32': {
const systemTar = `${process.env['windir']}\\System32\\tar.exe`;
if (compressionMethod !== constants_1.CompressionMethod.Gzip) {
// We only use zstandard compression on windows when gnu tar is installed due to
// a bug with compressing large files with bsdtar + zstd
args.push('--force-local');
const gnuTar = yield utils.getGnuTarPathOnWindows();
const systemTar = constants_1.SystemTarPathOnWindows;
if (gnuTar) {
// Use GNUtar as default on windows
return { path: gnuTar, type: constants_1.ArchiveToolType.GNU };
}
else if (fs_1.existsSync(systemTar)) {
return systemTar;
}
else if (yield utils.isGnuTarInstalled()) {
args.push('--force-local');
return { path: systemTar, type: constants_1.ArchiveToolType.BSD };
}
break;
}
@ -38244,25 +38257,92 @@ function getTarPath(args, compressionMethod) {
const gnuTar = yield io.which('gtar', false);
if (gnuTar) {
// fix permission denied errors when extracting BSD tar archive with GNU tar - https://github.com/actions/cache/issues/527
args.push('--delay-directory-restore');
return gnuTar;
return { path: gnuTar, type: constants_1.ArchiveToolType.GNU };
}
else {
return {
path: yield io.which('tar', true),
type: constants_1.ArchiveToolType.BSD
};
}
break;
}
default:
break;
}
return yield io.which('tar', true);
// Default assumption is GNU tar is present in path
return {
path: yield io.which('tar', true),
type: constants_1.ArchiveToolType.GNU
};
});
}
function execTar(args, compressionMethod, cwd) {
// Return arguments for tar as per tarPath, compressionMethod, method type and os
function getTarArgs(tarPath, compressionMethod, type, archivePath = '') {
return __awaiter(this, void 0, void 0, function* () {
try {
yield exec_1.exec(`"${yield getTarPath(args, compressionMethod)}"`, args, { cwd });
const args = [`"${tarPath.path}"`];
const cacheFileName = utils.getCacheFileName(compressionMethod);
const tarFile = 'cache.tar';
const workingDirectory = getWorkingDirectory();
// Specific args for BSD tar on Windows as a workaround
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
// Method specific args
switch (type) {
case 'create':
args.push('--posix', '-cf', BSD_TAR_ZSTD
? tarFile
: cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '--exclude', BSD_TAR_ZSTD
? tarFile
: cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P', '-C', workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '--files-from', constants_1.ManifestFilename);
break;
case 'extract':
args.push('-xf', BSD_TAR_ZSTD
? tarFile
: archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P', '-C', workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'));
break;
case 'list':
args.push('-tf', BSD_TAR_ZSTD
? tarFile
: archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P');
break;
}
catch (error) {
throw new Error(`Tar failed with error: ${error === null || error === void 0 ? void 0 : error.message}`);
// Platform specific args
if (tarPath.type === constants_1.ArchiveToolType.GNU) {
switch (process.platform) {
case 'win32':
args.push('--force-local');
break;
case 'darwin':
args.push('--delay-directory-restore');
break;
}
}
return args;
});
}
// Returns commands to run tar and compression program
function getCommands(compressionMethod, type, archivePath = '') {
return __awaiter(this, void 0, void 0, function* () {
let args;
const tarPath = yield getTarPath();
const tarArgs = yield getTarArgs(tarPath, compressionMethod, type, archivePath);
const compressionArgs = type !== 'create'
? yield getDecompressionProgram(tarPath, compressionMethod, archivePath)
: yield getCompressionProgram(tarPath, compressionMethod);
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
if (BSD_TAR_ZSTD && type !== 'create') {
args = [[...compressionArgs].join(' '), [...tarArgs].join(' ')];
}
else {
args = [[...tarArgs].join(' '), [...compressionArgs].join(' ')];
}
if (BSD_TAR_ZSTD) {
return args;
}
return [args.join(' ')];
});
}
function getWorkingDirectory() {
@ -38270,91 +38350,116 @@ function getWorkingDirectory() {
return (_a = process.env['GITHUB_WORKSPACE']) !== null && _a !== void 0 ? _a : process.cwd();
}
// Common function for extractTar and listTar to get the compression method
function getCompressionProgram(compressionMethod) {
// -d: Decompress.
// unzstd is equivalent to 'zstd -d'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return [
'--use-compress-program',
IS_WINDOWS ? 'zstd -d --long=30' : 'unzstd --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return ['--use-compress-program', IS_WINDOWS ? 'zstd -d' : 'unzstd'];
default:
return ['-z'];
}
function getDecompressionProgram(tarPath, compressionMethod, archivePath) {
return __awaiter(this, void 0, void 0, function* () {
// -d: Decompress.
// unzstd is equivalent to 'zstd -d'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return BSD_TAR_ZSTD
? [
'zstd -d --long=30 --force -o',
constants_1.TarFilename,
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
]
: [
'--use-compress-program',
IS_WINDOWS ? '"zstd -d --long=30"' : 'unzstd --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return BSD_TAR_ZSTD
? [
'zstd -d --force -o',
constants_1.TarFilename,
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
]
: ['--use-compress-program', IS_WINDOWS ? '"zstd -d"' : 'unzstd'];
default:
return ['-z'];
}
});
}
// Used for creating the archive
// -T#: Compress using # working thread. If # is 0, attempt to detect and use the number of physical CPU cores.
// zstdmt is equivalent to 'zstd -T0'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
// Long range mode is added to zstd in v1.3.2 release, so we will not use --long in older version of zstd.
function getCompressionProgram(tarPath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
const cacheFileName = utils.getCacheFileName(compressionMethod);
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return BSD_TAR_ZSTD
? [
'zstd -T0 --long=30 --force -o',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
constants_1.TarFilename
]
: [
'--use-compress-program',
IS_WINDOWS ? '"zstd -T0 --long=30"' : 'zstdmt --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return BSD_TAR_ZSTD
? [
'zstd -T0 --force -o',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
constants_1.TarFilename
]
: ['--use-compress-program', IS_WINDOWS ? '"zstd -T0"' : 'zstdmt'];
default:
return ['-z'];
}
});
}
// Executes all commands as separate processes
function execCommands(commands, cwd) {
return __awaiter(this, void 0, void 0, function* () {
for (const command of commands) {
try {
yield exec_1.exec(command, undefined, { cwd });
}
catch (error) {
throw new Error(`${command.split(' ')[0]} failed with error: ${error === null || error === void 0 ? void 0 : error.message}`);
}
}
});
}
// List the contents of a tar
function listTar(archivePath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
const args = [
...getCompressionProgram(compressionMethod),
'-tf',
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P'
];
yield execTar(args, compressionMethod);
const commands = yield getCommands(compressionMethod, 'list', archivePath);
yield execCommands(commands);
});
}
exports.listTar = listTar;
// Extract a tar
function extractTar(archivePath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
// Create directory to extract tar into
const workingDirectory = getWorkingDirectory();
yield io.mkdirP(workingDirectory);
const args = [
...getCompressionProgram(compressionMethod),
'-xf',
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P',
'-C',
workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
];
yield execTar(args, compressionMethod);
const commands = yield getCommands(compressionMethod, 'extract', archivePath);
yield execCommands(commands);
});
}
exports.extractTar = extractTar;
// Create a tar
function createTar(archiveFolder, sourceDirectories, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
// Write source directories to manifest.txt to avoid command length limits
const manifestFilename = 'manifest.txt';
const cacheFileName = utils.getCacheFileName(compressionMethod);
fs_1.writeFileSync(path.join(archiveFolder, manifestFilename), sourceDirectories.join('\n'));
const workingDirectory = getWorkingDirectory();
// -T#: Compress using # working thread. If # is 0, attempt to detect and use the number of physical CPU cores.
// zstdmt is equivalent to 'zstd -T0'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
// Long range mode is added to zstd in v1.3.2 release, so we will not use --long in older version of zstd.
function getCompressionProgram() {
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return [
'--use-compress-program',
IS_WINDOWS ? 'zstd -T0 --long=30' : 'zstdmt --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return ['--use-compress-program', IS_WINDOWS ? 'zstd -T0' : 'zstdmt'];
default:
return ['-z'];
}
}
const args = [
'--posix',
...getCompressionProgram(),
'-cf',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'--exclude',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P',
'-C',
workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'--files-from',
manifestFilename
];
yield execTar(args, compressionMethod, archiveFolder);
fs_1.writeFileSync(path.join(archiveFolder, constants_1.ManifestFilename), sourceDirectories.join('\n'));
const commands = yield getCommands(compressionMethod, 'create');
yield execCommands(commands, archiveFolder);
});
}
exports.createTar = createTar;
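
A minimal usage sketch of the three exported helpers; the require path and directories are made up, and 'zstd' is CompressionMethod.Zstd spelled out as its string form:

const tar = require('./tar'); // hypothetical path to this bundled module

(async () => {
    // Assumes /tmp/archive exists; createTar writes manifest.txt and the
    // archive into it, then listTar/extractTar read the result back.
    await tar.createTar('/tmp/archive', ['node_modules', 'dist'], 'zstd');
    await tar.listTar('/tmp/archive/cache.tzst', 'zstd');
    await tar.extractTar('/tmp/archive/cache.tzst', 'zstd');
})();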
@@ -47150,9 +47255,10 @@ exports.isFeatureAvailable = isFeatureAvailable;
* @param primaryKey an explicit key for restoring the cache
* @param restoreKeys an optional ordered list of keys to use for restoring the cache if no cache hit occurred for key
* @param downloadOptions cache download options
* @param enableCrossOsArchive an optional boolean that, when enabled, allows Windows runners to restore a cache created on any platform
* @returns string returns the key for the cache hit, otherwise returns undefined
*/
function restoreCache(paths, primaryKey, restoreKeys, options) {
function restoreCache(paths, primaryKey, restoreKeys, options, enableCrossOsArchive = false) {
return __awaiter(this, void 0, void 0, function* () {
checkPaths(paths);
restoreKeys = restoreKeys || [];
@@ -47170,7 +47276,8 @@ function restoreCache(paths, primaryKey, restoreKeys, options) {
try {
            // paths are needed to compute version
const cacheEntry = yield cacheHttpClient.getCacheEntry(keys, paths, {
compressionMethod
compressionMethod,
enableCrossOsArchive
});
if (!(cacheEntry === null || cacheEntry === void 0 ? void 0 : cacheEntry.archiveLocation)) {
// Cache not found
@@ -47217,10 +47324,11 @@ exports.restoreCache = restoreCache;
*
* @param paths a list of file paths to be cached
* @param key an explicit key for restoring the cache
* @param enableCrossOsArchive an optional boolean that, when enabled, allows a cache saved on Windows to be restored on any platform
* @param options cache upload options
* @returns number returns cacheId if the cache was saved successfully and throws an error if save fails
*/
function saveCache(paths, key, options) {
function saveCache(paths, key, options, enableCrossOsArchive = false) {
var _a, _b, _c, _d, _e;
return __awaiter(this, void 0, void 0, function* () {
checkPaths(paths);
@@ -47251,6 +47359,7 @@ function saveCache(paths, key, options) {
core.debug('Reserving Cache');
const reserveCacheResponse = yield cacheHttpClient.reserveCache(key, paths, {
compressionMethod,
enableCrossOsArchive,
cacheSize: archiveFileSize
});
if ((_a = reserveCacheResponse === null || reserveCacheResponse === void 0 ? void 0 : reserveCacheResponse.result) === null || _a === void 0 ? void 0 : _a.cacheId) {
@@ -50385,7 +50494,8 @@ function restoreImpl(stateProvider) {
const cachePaths = utils.getInputAsArray(constants_1.Inputs.Path, {
required: true
});
const cacheKey = yield cache.restoreCache(cachePaths, primaryKey, restoreKeys);
const enableCrossOsArchive = utils.getInputAsBool(constants_1.Inputs.EnableCrossOsArchive);
const cacheKey = yield cache.restoreCache(cachePaths, primaryKey, restoreKeys, {}, enableCrossOsArchive);
if (!cacheKey) {
core.info(`Cache not found for input keys: ${[
primaryKey,
@@ -53255,6 +53365,11 @@ var CompressionMethod;
CompressionMethod["ZstdWithoutLong"] = "zstd-without-long";
CompressionMethod["Zstd"] = "zstd";
})(CompressionMethod = exports.CompressionMethod || (exports.CompressionMethod = {}));
var ArchiveToolType;
(function (ArchiveToolType) {
ArchiveToolType["GNU"] = "gnu";
ArchiveToolType["BSD"] = "bsd";
})(ArchiveToolType = exports.ArchiveToolType || (exports.ArchiveToolType = {}));
// The default number of retry attempts.
exports.DefaultRetryAttempts = 2;
// The default delay in milliseconds between retry attempts.
@@ -53263,6 +53378,12 @@ exports.DefaultRetryDelay = 5000;
// over the socket during this period, the socket is destroyed and the download
// is aborted.
exports.SocketTimeout = 5000;
// The default path of GNU tar on hosted Windows runners
exports.GnuTarPathOnWindows = `${process.env['PROGRAMFILES']}\\Git\\usr\\bin\\tar.exe`;
// The default path of BSD tar on hosted Windows runners
exports.SystemTarPathOnWindows = `${process.env['SYSTEMDRIVE']}\\Windows\\System32\\tar.exe`;
exports.TarFilename = 'cache.tar';
exports.ManifestFilename = 'manifest.txt';
//# sourceMappingURL=constants.js.map
/***/ }),
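
On a typical hosted Windows runner the two tar constants above would resolve roughly as below; the concrete drive letters and install locations are assumptions:

const assumedGnuTar = 'C:\\Program Files\\Git\\usr\\bin\\tar.exe'; // GNU tar bundled with Git for Windows
const assumedSystemTar = 'C:\\Windows\\System32\\tar.exe';         // BSD tar shipped with Windows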

dist/restore/index.js vendored

@@ -1177,10 +1177,6 @@ function getVersion(app) {
// Use zstandard if possible to maximize cache performance
function getCompressionMethod() {
return __awaiter(this, void 0, void 0, function* () {
if (process.platform === 'win32' && !(yield isGnuTarInstalled())) {
// Disable zstd due to bug https://github.com/actions/cache/issues/301
return constants_1.CompressionMethod.Gzip;
}
const versionOutput = yield getVersion('zstd');
const version = semver.clean(versionOutput);
if (!versionOutput.toLowerCase().includes('zstd command line interface')) {
@@ -1204,13 +1200,16 @@ function getCacheFileName(compressionMethod) {
: constants_1.CacheFilename.Zstd;
}
exports.getCacheFileName = getCacheFileName;
function isGnuTarInstalled() {
function getGnuTarPathOnWindows() {
return __awaiter(this, void 0, void 0, function* () {
if (fs.existsSync(constants_1.GnuTarPathOnWindows)) {
return constants_1.GnuTarPathOnWindows;
}
const versionOutput = yield getVersion('tar');
return versionOutput.toLowerCase().includes('gnu tar');
return versionOutput.toLowerCase().includes('gnu tar') ? io.which('tar') : '';
});
}
exports.isGnuTarInstalled = isGnuTarInstalled;
exports.getGnuTarPathOnWindows = getGnuTarPathOnWindows;
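
A condensed sketch of the Windows detection order above; getExecOutput from @actions/exec stands in for the bundle's getVersion helper, and error handling is omitted:

const fs = require('fs');
const io = require('@actions/io');
const { getExecOutput } = require('@actions/exec');

async function whichGnuTarOnWindows() {
    // Prefer the GNU tar bundled with Git for Windows at its default location.
    const gitTar = `${process.env['PROGRAMFILES']}\\Git\\usr\\bin\\tar.exe`;
    if (fs.existsSync(gitTar)) {
        return gitTar;
    }
    // Otherwise accept whatever tar is on the PATH, but only if it is GNU tar.
    const { stdout } = await getExecOutput('tar --version');
    return stdout.toLowerCase().includes('gnu tar') ? io.which('tar') : '';
}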
function assertDefined(name, value) {
if (value === undefined) {
        throw Error(`Expected ${name} but value was undefined`);
@@ -3384,7 +3383,6 @@ const crypto = __importStar(__webpack_require__(417));
const fs = __importStar(__webpack_require__(747));
const url_1 = __webpack_require__(414);
const utils = __importStar(__webpack_require__(15));
const constants_1 = __webpack_require__(931);
const downloadUtils_1 = __webpack_require__(251);
const options_1 = __webpack_require__(538);
const requestUtils_1 = __webpack_require__(899);
@@ -3414,10 +3412,17 @@ function createHttpClient() {
const bearerCredentialHandler = new auth_1.BearerCredentialHandler(token);
return new http_client_1.HttpClient('actions/cache', [bearerCredentialHandler], getRequestOptions());
}
function getCacheVersion(paths, compressionMethod) {
const components = paths.concat(!compressionMethod || compressionMethod === constants_1.CompressionMethod.Gzip
? []
: [compressionMethod]);
function getCacheVersion(paths, compressionMethod, enableCrossOsArchive = false) {
const components = paths;
    // Add the compression method to the cache version so a compressed
    // cache is only restored with a matching compression method
if (compressionMethod) {
components.push(compressionMethod);
}
    // Add a 'windows-only' component on Windows unless enableCrossOsArchive is enabled
if (process.platform === 'win32' && !enableCrossOsArchive) {
components.push('windows-only');
}
// Add salt to cache version to support breaking changes in cache entry
components.push(versionSalt);
return crypto
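
Condensed, the getCacheVersion change in this hunk amounts to the sketch below; 'salt' stands in for the module's versionSalt, and the sketch copies the paths array where the bundled code pushes onto the caller's array directly:

const crypto = require('crypto');

function cacheVersion(paths, compressionMethod, enableCrossOsArchive = false, salt = '1.0') {
    const components = [...paths]; // copy; the bundle mutates the caller's array
    if (compressionMethod) {
        components.push(compressionMethod);
    }
    if (process.platform === 'win32' && !enableCrossOsArchive) {
        components.push('windows-only'); // keeps Windows caches separate by default
    }
    components.push(salt);
    return crypto.createHash('sha256').update(components.join('|')).digest('hex');
}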
@@ -3429,9 +3434,10 @@ exports.getCacheVersion = getCacheVersion;
function getCacheEntry(keys, paths, options) {
return __awaiter(this, void 0, void 0, function* () {
const httpClient = createHttpClient();
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod);
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod, options === null || options === void 0 ? void 0 : options.enableCrossOsArchive);
const resource = `cache?keys=${encodeURIComponent(keys.join(','))}&version=${version}`;
const response = yield requestUtils_1.retryTypedResponse('getCacheEntry', () => __awaiter(this, void 0, void 0, function* () { return httpClient.getJson(getCacheApiUrl(resource)); }));
// Cache not found
if (response.statusCode === 204) {
// List cache for primary key only if cache miss occurs
if (core.isDebug()) {
@@ -3445,6 +3451,7 @@ function getCacheEntry(keys, paths, options) {
const cacheResult = response.result;
const cacheDownloadUrl = cacheResult === null || cacheResult === void 0 ? void 0 : cacheResult.archiveLocation;
if (!cacheDownloadUrl) {
            // Cache archiveLocation not found. This should never happen, and hence bail out.
throw new Error('Cache not found.');
}
core.setSecret(cacheDownloadUrl);
@@ -3490,7 +3497,7 @@ exports.downloadCache = downloadCache;
function reserveCache(key, paths, options) {
return __awaiter(this, void 0, void 0, function* () {
const httpClient = createHttpClient();
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod);
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod, options === null || options === void 0 ? void 0 : options.enableCrossOsArchive);
const reserveCacheRequest = {
key,
version,
@@ -4970,7 +4977,8 @@ var Inputs;
Inputs["Key"] = "key";
Inputs["Path"] = "path";
Inputs["RestoreKeys"] = "restore-keys";
Inputs["UploadChunkSize"] = "upload-chunk-size"; // Input for cache, save action
Inputs["UploadChunkSize"] = "upload-chunk-size";
Inputs["EnableCrossOsArchive"] = "enableCrossOsArchive"; // Input for cache, restore, save action
})(Inputs = exports.Inputs || (exports.Inputs = {}));
var Outputs;
(function (Outputs) {
@@ -38129,27 +38137,27 @@ var __importStar = (this && this.__importStar) || function (mod) {
};
Object.defineProperty(exports, "__esModule", { value: true });
const exec_1 = __webpack_require__(986);
const core_1 = __webpack_require__(470);
const io = __importStar(__webpack_require__(1));
const fs_1 = __webpack_require__(747);
const path = __importStar(__webpack_require__(622));
const utils = __importStar(__webpack_require__(15));
const constants_1 = __webpack_require__(931);
const IS_WINDOWS = process.platform === 'win32';
function getTarPath(args, compressionMethod) {
core_1.exportVariable('MSYS', 'winsymlinks:nativestrict');
// Returns tar path and type: BSD or GNU
function getTarPath() {
return __awaiter(this, void 0, void 0, function* () {
switch (process.platform) {
case 'win32': {
const systemTar = `${process.env['windir']}\\System32\\tar.exe`;
if (compressionMethod !== constants_1.CompressionMethod.Gzip) {
// We only use zstandard compression on windows when gnu tar is installed due to
// a bug with compressing large files with bsdtar + zstd
args.push('--force-local');
const gnuTar = yield utils.getGnuTarPathOnWindows();
const systemTar = constants_1.SystemTarPathOnWindows;
if (gnuTar) {
                    // Use GNU tar as the default on Windows
return { path: gnuTar, type: constants_1.ArchiveToolType.GNU };
}
else if (fs_1.existsSync(systemTar)) {
return systemTar;
}
else if (yield utils.isGnuTarInstalled()) {
args.push('--force-local');
return { path: systemTar, type: constants_1.ArchiveToolType.BSD };
}
break;
}
@@ -38157,25 +38165,92 @@ function getTarPath(args, compressionMethod) {
const gnuTar = yield io.which('gtar', false);
if (gnuTar) {
// fix permission denied errors when extracting BSD tar archive with GNU tar - https://github.com/actions/cache/issues/527
args.push('--delay-directory-restore');
return gnuTar;
return { path: gnuTar, type: constants_1.ArchiveToolType.GNU };
}
else {
return {
path: yield io.which('tar', true),
type: constants_1.ArchiveToolType.BSD
};
}
break;
}
default:
break;
}
return yield io.which('tar', true);
        // Default assumption is that GNU tar is present in the PATH
return {
path: yield io.which('tar', true),
type: constants_1.ArchiveToolType.GNU
};
});
}
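
Callers only consume the { path, type } pair returned above; an illustrative (non-bundled) consumer:

// ArchiveToolType values are the strings 'gnu' and 'bsd' per constants.js.
async function describeTar() {
    const tarPath = await getTarPath();
    return `${tarPath.type} tar at ${tarPath.path}`;
}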
function execTar(args, compressionMethod, cwd) {
// Return the tar arguments based on tarPath, compressionMethod, method type and OS
function getTarArgs(tarPath, compressionMethod, type, archivePath = '') {
return __awaiter(this, void 0, void 0, function* () {
try {
yield exec_1.exec(`"${yield getTarPath(args, compressionMethod)}"`, args, { cwd });
const args = [`"${tarPath.path}"`];
const cacheFileName = utils.getCacheFileName(compressionMethod);
const tarFile = 'cache.tar';
const workingDirectory = getWorkingDirectory();
        // Specific args for BSD tar on Windows for the zstd workaround
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
// Method specific args
switch (type) {
case 'create':
args.push('--posix', '-cf', BSD_TAR_ZSTD
? tarFile
: cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '--exclude', BSD_TAR_ZSTD
? tarFile
: cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P', '-C', workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '--files-from', constants_1.ManifestFilename);
break;
case 'extract':
args.push('-xf', BSD_TAR_ZSTD
? tarFile
: archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P', '-C', workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'));
break;
case 'list':
args.push('-tf', BSD_TAR_ZSTD
? tarFile
: archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P');
break;
}
catch (error) {
throw new Error(`Tar failed with error: ${error === null || error === void 0 ? void 0 : error.message}`);
// Platform specific args
if (tarPath.type === constants_1.ArchiveToolType.GNU) {
switch (process.platform) {
case 'win32':
args.push('--force-local');
break;
case 'darwin':
args.push('--delay-directory-restore');
break;
}
}
return args;
});
}
// Returns commands to run tar and compression program
function getCommands(compressionMethod, type, archivePath = '') {
return __awaiter(this, void 0, void 0, function* () {
let args;
const tarPath = yield getTarPath();
const tarArgs = yield getTarArgs(tarPath, compressionMethod, type, archivePath);
const compressionArgs = type !== 'create'
? yield getDecompressionProgram(tarPath, compressionMethod, archivePath)
: yield getCompressionProgram(tarPath, compressionMethod);
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
if (BSD_TAR_ZSTD && type !== 'create') {
args = [[...compressionArgs].join(' '), [...tarArgs].join(' ')];
}
else {
args = [[...tarArgs].join(' '), [...compressionArgs].join(' ')];
}
if (BSD_TAR_ZSTD) {
return args;
}
return [args.join(' ')];
});
}
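
Hypothetical getCommands results on a Windows runner with BSD tar and zstd, to make the ordering concrete (all paths invented):

// extract: two separate processes, zstd decompression before tar
const extractCommands = [
    'zstd -d --long=30 --force -o cache.tar D:/a/_temp/cache.tzst',
    '"C:\\Windows\\System32\\tar.exe" -xf cache.tar -P -C D:/a/repo'
];
// create: tar writes cache.tar first, zstd compresses it afterwards
const createCommands = [
    '"C:\\Windows\\System32\\tar.exe" --posix -cf cache.tar --exclude cache.tar -P -C D:/a/repo --files-from manifest.txt',
    'zstd -T0 --long=30 --force -o cache.tzst cache.tar'
];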
function getWorkingDirectory() {
@@ -38183,91 +38258,116 @@ function getWorkingDirectory() {
return (_a = process.env['GITHUB_WORKSPACE']) !== null && _a !== void 0 ? _a : process.cwd();
}
// Common function for extractTar and listTar to get the compression method
function getCompressionProgram(compressionMethod) {
// -d: Decompress.
// unzstd is equivalent to 'zstd -d'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return [
'--use-compress-program',
IS_WINDOWS ? 'zstd -d --long=30' : 'unzstd --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return ['--use-compress-program', IS_WINDOWS ? 'zstd -d' : 'unzstd'];
default:
return ['-z'];
}
function getDecompressionProgram(tarPath, compressionMethod, archivePath) {
return __awaiter(this, void 0, void 0, function* () {
// -d: Decompress.
// unzstd is equivalent to 'zstd -d'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return BSD_TAR_ZSTD
? [
'zstd -d --long=30 --force -o',
constants_1.TarFilename,
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
]
: [
'--use-compress-program',
IS_WINDOWS ? '"zstd -d --long=30"' : 'unzstd --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return BSD_TAR_ZSTD
? [
'zstd -d --force -o',
constants_1.TarFilename,
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
]
: ['--use-compress-program', IS_WINDOWS ? '"zstd -d"' : 'unzstd'];
default:
return ['-z'];
}
});
}
// Used for creating the archive
// -T#: Compress using # working threads. If # is 0, attempt to detect and use the number of physical CPU cores.
// zstdmt is equivalent to 'zstd -T0'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
// Long range mode was added to zstd in the v1.3.2 release, so --long is not used with older versions of zstd.
function getCompressionProgram(tarPath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
const cacheFileName = utils.getCacheFileName(compressionMethod);
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return BSD_TAR_ZSTD
? [
'zstd -T0 --long=30 --force -o',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
constants_1.TarFilename
]
: [
'--use-compress-program',
IS_WINDOWS ? '"zstd -T0 --long=30"' : 'zstdmt --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return BSD_TAR_ZSTD
? [
'zstd -T0 --force -o',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
constants_1.TarFilename
]
: ['--use-compress-program', IS_WINDOWS ? '"zstd -T0"' : 'zstdmt'];
default:
return ['-z'];
}
});
}
// Executes all commands as separate processes
function execCommands(commands, cwd) {
return __awaiter(this, void 0, void 0, function* () {
for (const command of commands) {
try {
yield exec_1.exec(command, undefined, { cwd });
}
catch (error) {
throw new Error(`${command.split(' ')[0]} failed with error: ${error === null || error === void 0 ? void 0 : error.message}`);
}
}
});
}
// List the contents of a tar
function listTar(archivePath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
const args = [
...getCompressionProgram(compressionMethod),
'-tf',
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P'
];
yield execTar(args, compressionMethod);
const commands = yield getCommands(compressionMethod, 'list', archivePath);
yield execCommands(commands);
});
}
exports.listTar = listTar;
// Extract a tar
function extractTar(archivePath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
// Create directory to extract tar into
const workingDirectory = getWorkingDirectory();
yield io.mkdirP(workingDirectory);
const args = [
...getCompressionProgram(compressionMethod),
'-xf',
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P',
'-C',
workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
];
yield execTar(args, compressionMethod);
const commands = yield getCommands(compressionMethod, 'extract', archivePath);
yield execCommands(commands);
});
}
exports.extractTar = extractTar;
// Create a tar
function createTar(archiveFolder, sourceDirectories, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
// Write source directories to manifest.txt to avoid command length limits
const manifestFilename = 'manifest.txt';
const cacheFileName = utils.getCacheFileName(compressionMethod);
fs_1.writeFileSync(path.join(archiveFolder, manifestFilename), sourceDirectories.join('\n'));
const workingDirectory = getWorkingDirectory();
        // -T#: Compress using # working threads. If # is 0, attempt to detect and use the number of physical CPU cores.
        // zstdmt is equivalent to 'zstd -T0'
        // --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
        // Using 30 here because we also support 32-bit self-hosted runners.
        // Long range mode was added to zstd in the v1.3.2 release, so --long is not used with older versions of zstd.
function getCompressionProgram() {
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return [
'--use-compress-program',
IS_WINDOWS ? 'zstd -T0 --long=30' : 'zstdmt --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return ['--use-compress-program', IS_WINDOWS ? 'zstd -T0' : 'zstdmt'];
default:
return ['-z'];
}
}
const args = [
'--posix',
...getCompressionProgram(),
'-cf',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'--exclude',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P',
'-C',
workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'--files-from',
manifestFilename
];
yield execTar(args, compressionMethod, archiveFolder);
fs_1.writeFileSync(path.join(archiveFolder, constants_1.ManifestFilename), sourceDirectories.join('\n'));
const commands = yield getCommands(compressionMethod, 'create');
yield execCommands(commands, archiveFolder);
});
}
exports.createTar = createTar;
@@ -38502,7 +38602,7 @@ var __importStar = (this && this.__importStar) || function (mod) {
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.isCacheFeatureAvailable = exports.getInputAsInt = exports.getInputAsArray = exports.isValidEvent = exports.logWarning = exports.isExactKeyMatch = exports.isGhes = void 0;
exports.isCacheFeatureAvailable = exports.getInputAsBool = exports.getInputAsInt = exports.getInputAsArray = exports.isValidEvent = exports.logWarning = exports.isExactKeyMatch = exports.isGhes = void 0;
const cache = __importStar(__webpack_require__(692));
const core = __importStar(__webpack_require__(470));
const constants_1 = __webpack_require__(196);
@@ -38545,6 +38645,11 @@ function getInputAsInt(name, options) {
return value;
}
exports.getInputAsInt = getInputAsInt;
function getInputAsBool(name, options) {
const result = core.getInput(name, options);
return result.toLowerCase() === "true";
}
exports.getInputAsBool = getInputAsBool;
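
A short usage sketch of the new helper; the input name matches the EnableCrossOsArchive entry added to the Inputs enum above:

const core = require('@actions/core');

// Action inputs always arrive as strings, so "true"/"false" must be parsed.
const enableCrossOsArchive =
    core.getInput('enableCrossOsArchive').toLowerCase() === 'true';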
function isCacheFeatureAvailable() {
if (cache.isFeatureAvailable()) {
return true;
@@ -47121,9 +47226,10 @@ exports.isFeatureAvailable = isFeatureAvailable;
* @param primaryKey an explicit key for restoring the cache
* @param restoreKeys an optional ordered list of keys to use for restoring the cache if no cache hit occurred for key
* @param downloadOptions cache download options
* @param enableCrossOsArchive an optional boolean that, when enabled, allows Windows runners to restore a cache created on any platform
* @returns string returns the key for the cache hit, otherwise returns undefined
*/
function restoreCache(paths, primaryKey, restoreKeys, options) {
function restoreCache(paths, primaryKey, restoreKeys, options, enableCrossOsArchive = false) {
return __awaiter(this, void 0, void 0, function* () {
checkPaths(paths);
restoreKeys = restoreKeys || [];
@@ -47141,7 +47247,8 @@ function restoreCache(paths, primaryKey, restoreKeys, options) {
try {
            // paths are needed to compute version
const cacheEntry = yield cacheHttpClient.getCacheEntry(keys, paths, {
compressionMethod
compressionMethod,
enableCrossOsArchive
});
if (!(cacheEntry === null || cacheEntry === void 0 ? void 0 : cacheEntry.archiveLocation)) {
// Cache not found
@@ -47188,10 +47295,11 @@ exports.restoreCache = restoreCache;
*
* @param paths a list of file paths to be cached
* @param key an explicit key for restoring the cache
* @param enableCrossOsArchive an optional boolean that, when enabled, allows a cache saved on Windows to be restored on any platform
* @param options cache upload options
* @returns number returns cacheId if the cache was saved successfully and throws an error if save fails
*/
function saveCache(paths, key, options) {
function saveCache(paths, key, options, enableCrossOsArchive = false) {
var _a, _b, _c, _d, _e;
return __awaiter(this, void 0, void 0, function* () {
checkPaths(paths);
@@ -47222,6 +47330,7 @@ function saveCache(paths, key, options) {
core.debug('Reserving Cache');
const reserveCacheResponse = yield cacheHttpClient.reserveCache(key, paths, {
compressionMethod,
enableCrossOsArchive,
cacheSize: archiveFileSize
});
if ((_a = reserveCacheResponse === null || reserveCacheResponse === void 0 ? void 0 : reserveCacheResponse.result) === null || _a === void 0 ? void 0 : _a.cacheId) {
@@ -50385,7 +50494,8 @@ function restoreImpl(stateProvider) {
const cachePaths = utils.getInputAsArray(constants_1.Inputs.Path, {
required: true
});
const cacheKey = yield cache.restoreCache(cachePaths, primaryKey, restoreKeys);
const enableCrossOsArchive = utils.getInputAsBool(constants_1.Inputs.EnableCrossOsArchive);
const cacheKey = yield cache.restoreCache(cachePaths, primaryKey, restoreKeys, {}, enableCrossOsArchive);
if (!cacheKey) {
core.info(`Cache not found for input keys: ${[
primaryKey,
@@ -53255,6 +53365,11 @@ var CompressionMethod;
CompressionMethod["ZstdWithoutLong"] = "zstd-without-long";
CompressionMethod["Zstd"] = "zstd";
})(CompressionMethod = exports.CompressionMethod || (exports.CompressionMethod = {}));
var ArchiveToolType;
(function (ArchiveToolType) {
ArchiveToolType["GNU"] = "gnu";
ArchiveToolType["BSD"] = "bsd";
})(ArchiveToolType = exports.ArchiveToolType || (exports.ArchiveToolType = {}));
// The default number of retry attempts.
exports.DefaultRetryAttempts = 2;
// The default delay in milliseconds between retry attempts.
@@ -53263,6 +53378,12 @@ exports.DefaultRetryDelay = 5000;
// over the socket during this period, the socket is destroyed and the download
// is aborted.
exports.SocketTimeout = 5000;
// The default path of GNU tar on hosted Windows runners
exports.GnuTarPathOnWindows = `${process.env['PROGRAMFILES']}\\Git\\usr\\bin\\tar.exe`;
// The default path of BSD tar on hosted Windows runners
exports.SystemTarPathOnWindows = `${process.env['SYSTEMDRIVE']}\\Windows\\System32\\tar.exe`;
exports.TarFilename = 'cache.tar';
exports.ManifestFilename = 'manifest.txt';
//# sourceMappingURL=constants.js.map
/***/ }),


@@ -1233,10 +1233,6 @@ function getVersion(app) {
// Use zstandard if possible to maximize cache performance
function getCompressionMethod() {
return __awaiter(this, void 0, void 0, function* () {
if (process.platform === 'win32' && !(yield isGnuTarInstalled())) {
// Disable zstd due to bug https://github.com/actions/cache/issues/301
return constants_1.CompressionMethod.Gzip;
}
const versionOutput = yield getVersion('zstd');
const version = semver.clean(versionOutput);
if (!versionOutput.toLowerCase().includes('zstd command line interface')) {
@@ -1260,13 +1256,16 @@ function getCacheFileName(compressionMethod) {
: constants_1.CacheFilename.Zstd;
}
exports.getCacheFileName = getCacheFileName;
function isGnuTarInstalled() {
function getGnuTarPathOnWindows() {
return __awaiter(this, void 0, void 0, function* () {
if (fs.existsSync(constants_1.GnuTarPathOnWindows)) {
return constants_1.GnuTarPathOnWindows;
}
const versionOutput = yield getVersion('tar');
return versionOutput.toLowerCase().includes('gnu tar');
return versionOutput.toLowerCase().includes('gnu tar') ? io.which('tar') : '';
});
}
exports.isGnuTarInstalled = isGnuTarInstalled;
exports.getGnuTarPathOnWindows = getGnuTarPathOnWindows;
function assertDefined(name, value) {
if (value === undefined) {
        throw Error(`Expected ${name} but value was undefined`);
@@ -3440,7 +3439,6 @@ const crypto = __importStar(__webpack_require__(417));
const fs = __importStar(__webpack_require__(747));
const url_1 = __webpack_require__(835);
const utils = __importStar(__webpack_require__(15));
const constants_1 = __webpack_require__(931);
const downloadUtils_1 = __webpack_require__(251);
const options_1 = __webpack_require__(538);
const requestUtils_1 = __webpack_require__(899);
@@ -3470,10 +3468,17 @@ function createHttpClient() {
const bearerCredentialHandler = new auth_1.BearerCredentialHandler(token);
return new http_client_1.HttpClient('actions/cache', [bearerCredentialHandler], getRequestOptions());
}
function getCacheVersion(paths, compressionMethod) {
const components = paths.concat(!compressionMethod || compressionMethod === constants_1.CompressionMethod.Gzip
? []
: [compressionMethod]);
function getCacheVersion(paths, compressionMethod, enableCrossOsArchive = false) {
const components = paths;
    // Add the compression method to the cache version so a compressed
    // cache is only restored with a matching compression method
if (compressionMethod) {
components.push(compressionMethod);
}
    // Add a 'windows-only' component on Windows unless enableCrossOsArchive is enabled
if (process.platform === 'win32' && !enableCrossOsArchive) {
components.push('windows-only');
}
// Add salt to cache version to support breaking changes in cache entry
components.push(versionSalt);
return crypto
@@ -3485,9 +3490,10 @@ exports.getCacheVersion = getCacheVersion;
function getCacheEntry(keys, paths, options) {
return __awaiter(this, void 0, void 0, function* () {
const httpClient = createHttpClient();
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod);
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod, options === null || options === void 0 ? void 0 : options.enableCrossOsArchive);
const resource = `cache?keys=${encodeURIComponent(keys.join(','))}&version=${version}`;
const response = yield requestUtils_1.retryTypedResponse('getCacheEntry', () => __awaiter(this, void 0, void 0, function* () { return httpClient.getJson(getCacheApiUrl(resource)); }));
// Cache not found
if (response.statusCode === 204) {
// List cache for primary key only if cache miss occurs
if (core.isDebug()) {
@@ -3501,6 +3507,7 @@ function getCacheEntry(keys, paths, options) {
const cacheResult = response.result;
const cacheDownloadUrl = cacheResult === null || cacheResult === void 0 ? void 0 : cacheResult.archiveLocation;
if (!cacheDownloadUrl) {
            // Cache archiveLocation not found. This should never happen, and hence bail out.
throw new Error('Cache not found.');
}
core.setSecret(cacheDownloadUrl);
@@ -3546,7 +3553,7 @@ exports.downloadCache = downloadCache;
function reserveCache(key, paths, options) {
return __awaiter(this, void 0, void 0, function* () {
const httpClient = createHttpClient();
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod);
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod, options === null || options === void 0 ? void 0 : options.enableCrossOsArchive);
const reserveCacheRequest = {
key,
version,
@@ -5026,7 +5033,8 @@ var Inputs;
Inputs["Key"] = "key";
Inputs["Path"] = "path";
Inputs["RestoreKeys"] = "restore-keys";
Inputs["UploadChunkSize"] = "upload-chunk-size"; // Input for cache, save action
Inputs["UploadChunkSize"] = "upload-chunk-size";
Inputs["EnableCrossOsArchive"] = "enableCrossOsArchive"; // Input for cache, restore, save action
})(Inputs = exports.Inputs || (exports.Inputs = {}));
var Outputs;
(function (Outputs) {
@@ -38180,27 +38188,27 @@ var __importStar = (this && this.__importStar) || function (mod) {
};
Object.defineProperty(exports, "__esModule", { value: true });
const exec_1 = __webpack_require__(986);
const core_1 = __webpack_require__(470);
const io = __importStar(__webpack_require__(1));
const fs_1 = __webpack_require__(747);
const path = __importStar(__webpack_require__(622));
const utils = __importStar(__webpack_require__(15));
const constants_1 = __webpack_require__(931);
const IS_WINDOWS = process.platform === 'win32';
function getTarPath(args, compressionMethod) {
core_1.exportVariable('MSYS', 'winsymlinks:nativestrict');
// Returns tar path and type: BSD or GNU
function getTarPath() {
return __awaiter(this, void 0, void 0, function* () {
switch (process.platform) {
case 'win32': {
const systemTar = `${process.env['windir']}\\System32\\tar.exe`;
if (compressionMethod !== constants_1.CompressionMethod.Gzip) {
// We only use zstandard compression on windows when gnu tar is installed due to
// a bug with compressing large files with bsdtar + zstd
args.push('--force-local');
const gnuTar = yield utils.getGnuTarPathOnWindows();
const systemTar = constants_1.SystemTarPathOnWindows;
if (gnuTar) {
                    // Use GNU tar as the default on Windows
return { path: gnuTar, type: constants_1.ArchiveToolType.GNU };
}
else if (fs_1.existsSync(systemTar)) {
return systemTar;
}
else if (yield utils.isGnuTarInstalled()) {
args.push('--force-local');
return { path: systemTar, type: constants_1.ArchiveToolType.BSD };
}
break;
}
@@ -38208,25 +38216,92 @@ function getTarPath(args, compressionMethod) {
const gnuTar = yield io.which('gtar', false);
if (gnuTar) {
// fix permission denied errors when extracting BSD tar archive with GNU tar - https://github.com/actions/cache/issues/527
args.push('--delay-directory-restore');
return gnuTar;
return { path: gnuTar, type: constants_1.ArchiveToolType.GNU };
}
else {
return {
path: yield io.which('tar', true),
type: constants_1.ArchiveToolType.BSD
};
}
break;
}
default:
break;
}
return yield io.which('tar', true);
        // Default assumption is that GNU tar is present in the PATH
return {
path: yield io.which('tar', true),
type: constants_1.ArchiveToolType.GNU
};
});
}
function execTar(args, compressionMethod, cwd) {
// Return the tar arguments based on tarPath, compressionMethod, method type and OS
function getTarArgs(tarPath, compressionMethod, type, archivePath = '') {
return __awaiter(this, void 0, void 0, function* () {
try {
yield exec_1.exec(`"${yield getTarPath(args, compressionMethod)}"`, args, { cwd });
const args = [`"${tarPath.path}"`];
const cacheFileName = utils.getCacheFileName(compressionMethod);
const tarFile = 'cache.tar';
const workingDirectory = getWorkingDirectory();
        // Specific args for BSD tar on Windows for the zstd workaround
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
// Method specific args
switch (type) {
case 'create':
args.push('--posix', '-cf', BSD_TAR_ZSTD
? tarFile
: cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '--exclude', BSD_TAR_ZSTD
? tarFile
: cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P', '-C', workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '--files-from', constants_1.ManifestFilename);
break;
case 'extract':
args.push('-xf', BSD_TAR_ZSTD
? tarFile
: archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P', '-C', workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'));
break;
case 'list':
args.push('-tf', BSD_TAR_ZSTD
? tarFile
: archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P');
break;
}
catch (error) {
throw new Error(`Tar failed with error: ${error === null || error === void 0 ? void 0 : error.message}`);
// Platform specific args
if (tarPath.type === constants_1.ArchiveToolType.GNU) {
switch (process.platform) {
case 'win32':
args.push('--force-local');
break;
case 'darwin':
args.push('--delay-directory-restore');
break;
}
}
return args;
});
}
// Returns commands to run tar and compression program
function getCommands(compressionMethod, type, archivePath = '') {
return __awaiter(this, void 0, void 0, function* () {
let args;
const tarPath = yield getTarPath();
const tarArgs = yield getTarArgs(tarPath, compressionMethod, type, archivePath);
const compressionArgs = type !== 'create'
? yield getDecompressionProgram(tarPath, compressionMethod, archivePath)
: yield getCompressionProgram(tarPath, compressionMethod);
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
if (BSD_TAR_ZSTD && type !== 'create') {
args = [[...compressionArgs].join(' '), [...tarArgs].join(' ')];
}
else {
args = [[...tarArgs].join(' '), [...compressionArgs].join(' ')];
}
if (BSD_TAR_ZSTD) {
return args;
}
return [args.join(' ')];
});
}
function getWorkingDirectory() {
@@ -38234,91 +38309,116 @@ function getWorkingDirectory() {
return (_a = process.env['GITHUB_WORKSPACE']) !== null && _a !== void 0 ? _a : process.cwd();
}
// Common function for extractTar and listTar to get the compression method
function getCompressionProgram(compressionMethod) {
// -d: Decompress.
// unzstd is equivalent to 'zstd -d'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return [
'--use-compress-program',
IS_WINDOWS ? 'zstd -d --long=30' : 'unzstd --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return ['--use-compress-program', IS_WINDOWS ? 'zstd -d' : 'unzstd'];
default:
return ['-z'];
}
function getDecompressionProgram(tarPath, compressionMethod, archivePath) {
return __awaiter(this, void 0, void 0, function* () {
// -d: Decompress.
// unzstd is equivalent to 'zstd -d'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return BSD_TAR_ZSTD
? [
'zstd -d --long=30 --force -o',
constants_1.TarFilename,
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
]
: [
'--use-compress-program',
IS_WINDOWS ? '"zstd -d --long=30"' : 'unzstd --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return BSD_TAR_ZSTD
? [
'zstd -d --force -o',
constants_1.TarFilename,
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
]
: ['--use-compress-program', IS_WINDOWS ? '"zstd -d"' : 'unzstd'];
default:
return ['-z'];
}
});
}
// Used for creating the archive
// -T#: Compress using # working threads. If # is 0, attempt to detect and use the number of physical CPU cores.
// zstdmt is equivalent to 'zstd -T0'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
// Long range mode was added to zstd in the v1.3.2 release, so --long is not used with older versions of zstd.
function getCompressionProgram(tarPath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
const cacheFileName = utils.getCacheFileName(compressionMethod);
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return BSD_TAR_ZSTD
? [
'zstd -T0 --long=30 --force -o',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
constants_1.TarFilename
]
: [
'--use-compress-program',
IS_WINDOWS ? '"zstd -T0 --long=30"' : 'zstdmt --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return BSD_TAR_ZSTD
? [
'zstd -T0 --force -o',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
constants_1.TarFilename
]
: ['--use-compress-program', IS_WINDOWS ? '"zstd -T0"' : 'zstdmt'];
default:
return ['-z'];
}
});
}
// Executes all commands as separate processes
function execCommands(commands, cwd) {
return __awaiter(this, void 0, void 0, function* () {
for (const command of commands) {
try {
yield exec_1.exec(command, undefined, { cwd });
}
catch (error) {
throw new Error(`${command.split(' ')[0]} failed with error: ${error === null || error === void 0 ? void 0 : error.message}`);
}
}
});
}
// List the contents of a tar
function listTar(archivePath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
const args = [
...getCompressionProgram(compressionMethod),
'-tf',
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P'
];
yield execTar(args, compressionMethod);
const commands = yield getCommands(compressionMethod, 'list', archivePath);
yield execCommands(commands);
});
}
exports.listTar = listTar;
// Extract a tar
function extractTar(archivePath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
// Create directory to extract tar into
const workingDirectory = getWorkingDirectory();
yield io.mkdirP(workingDirectory);
const args = [
...getCompressionProgram(compressionMethod),
'-xf',
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P',
'-C',
workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
];
yield execTar(args, compressionMethod);
const commands = yield getCommands(compressionMethod, 'extract', archivePath);
yield execCommands(commands);
});
}
exports.extractTar = extractTar;
// Create a tar
function createTar(archiveFolder, sourceDirectories, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
// Write source directories to manifest.txt to avoid command length limits
const manifestFilename = 'manifest.txt';
const cacheFileName = utils.getCacheFileName(compressionMethod);
fs_1.writeFileSync(path.join(archiveFolder, manifestFilename), sourceDirectories.join('\n'));
const workingDirectory = getWorkingDirectory();
        // -T#: Compress using # working threads. If # is 0, attempt to detect and use the number of physical CPU cores.
        // zstdmt is equivalent to 'zstd -T0'
        // --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
        // Using 30 here because we also support 32-bit self-hosted runners.
        // Long range mode was added to zstd in the v1.3.2 release, so --long is not used with older versions of zstd.
function getCompressionProgram() {
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return [
'--use-compress-program',
IS_WINDOWS ? 'zstd -T0 --long=30' : 'zstdmt --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return ['--use-compress-program', IS_WINDOWS ? 'zstd -T0' : 'zstdmt'];
default:
return ['-z'];
}
}
const args = [
'--posix',
...getCompressionProgram(),
'-cf',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'--exclude',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P',
'-C',
workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'--files-from',
manifestFilename
];
yield execTar(args, compressionMethod, archiveFolder);
fs_1.writeFileSync(path.join(archiveFolder, constants_1.ManifestFilename), sourceDirectories.join('\n'));
const commands = yield getCommands(compressionMethod, 'create');
yield execCommands(commands, archiveFolder);
});
}
exports.createTar = createTar;
@@ -38553,7 +38653,7 @@ var __importStar = (this && this.__importStar) || function (mod) {
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.isCacheFeatureAvailable = exports.getInputAsInt = exports.getInputAsArray = exports.isValidEvent = exports.logWarning = exports.isExactKeyMatch = exports.isGhes = void 0;
exports.isCacheFeatureAvailable = exports.getInputAsBool = exports.getInputAsInt = exports.getInputAsArray = exports.isValidEvent = exports.logWarning = exports.isExactKeyMatch = exports.isGhes = void 0;
const cache = __importStar(__webpack_require__(692));
const core = __importStar(__webpack_require__(470));
const constants_1 = __webpack_require__(196);
@@ -38596,6 +38696,11 @@ function getInputAsInt(name, options) {
return value;
}
exports.getInputAsInt = getInputAsInt;
function getInputAsBool(name, options) {
const result = core.getInput(name, options);
return result.toLowerCase() === "true";
}
exports.getInputAsBool = getInputAsBool;
function isCacheFeatureAvailable() {
if (cache.isFeatureAvailable()) {
return true;
@@ -41075,9 +41180,8 @@ function saveImpl(stateProvider) {
const cachePaths = utils.getInputAsArray(constants_1.Inputs.Path, {
required: true
});
cacheId = yield cache.saveCache(cachePaths, primaryKey, {
uploadChunkSize: utils.getInputAsInt(constants_1.Inputs.UploadChunkSize)
});
const enableCrossOsArchive = utils.getInputAsBool(constants_1.Inputs.EnableCrossOsArchive);
cacheId = yield cache.saveCache(cachePaths, primaryKey, { uploadChunkSize: utils.getInputAsInt(constants_1.Inputs.UploadChunkSize) }, enableCrossOsArchive);
if (cacheId != -1) {
core.info(`Cache saved with key: ${primaryKey}`);
}
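
Condensed from the saveImpl change above, a hedged sketch of how the parsed flag rides along as the new fourth argument of saveCache; the concrete values are stand-ins:

async function saveExample(cache, utils, constants) {
    const cachePaths = ['node_modules'];   // stand-in for the 'path' input
    const primaryKey = 'npm-cache-v1';     // stand-in for the resolved key
    const enableCrossOsArchive = utils.getInputAsBool(constants.Inputs.EnableCrossOsArchive);
    // Options stay in the third argument; the cross-OS flag is passed separately.
    return cache.saveCache(
        cachePaths,
        primaryKey,
        { uploadChunkSize: utils.getInputAsInt(constants.Inputs.UploadChunkSize) },
        enableCrossOsArchive
    );
}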
@@ -47263,9 +47367,10 @@ exports.isFeatureAvailable = isFeatureAvailable;
* @param primaryKey an explicit key for restoring the cache
* @param restoreKeys an optional ordered list of keys to use for restoring the cache if no cache hit occurred for key
* @param downloadOptions cache download options
* @param enableCrossOsArchive an optional boolean that, when enabled, allows Windows runners to restore a cache created on any platform
* @returns string returns the key for the cache hit, otherwise returns undefined
*/
function restoreCache(paths, primaryKey, restoreKeys, options) {
function restoreCache(paths, primaryKey, restoreKeys, options, enableCrossOsArchive = false) {
return __awaiter(this, void 0, void 0, function* () {
checkPaths(paths);
restoreKeys = restoreKeys || [];
@@ -47283,7 +47388,8 @@ function restoreCache(paths, primaryKey, restoreKeys, options) {
try {
            // paths are needed to compute version
const cacheEntry = yield cacheHttpClient.getCacheEntry(keys, paths, {
compressionMethod
compressionMethod,
enableCrossOsArchive
});
if (!(cacheEntry === null || cacheEntry === void 0 ? void 0 : cacheEntry.archiveLocation)) {
// Cache not found
@@ -47330,10 +47436,11 @@ exports.restoreCache = restoreCache;
*
* @param paths a list of file paths to be cached
* @param key an explicit key for restoring the cache
* @param enableCrossOsArchive an optional boolean that, when enabled, allows a cache saved on Windows to be restored on any platform
* @param options cache upload options
* @returns number returns cacheId if the cache was saved successfully and throws an error if save fails
*/
function saveCache(paths, key, options) {
function saveCache(paths, key, options, enableCrossOsArchive = false) {
var _a, _b, _c, _d, _e;
return __awaiter(this, void 0, void 0, function* () {
checkPaths(paths);
@@ -47364,6 +47471,7 @@ function saveCache(paths, key, options) {
core.debug('Reserving Cache');
const reserveCacheResponse = yield cacheHttpClient.reserveCache(key, paths, {
compressionMethod,
enableCrossOsArchive,
cacheSize: archiveFileSize
});
if ((_a = reserveCacheResponse === null || reserveCacheResponse === void 0 ? void 0 : reserveCacheResponse.result) === null || _a === void 0 ? void 0 : _a.cacheId) {
@@ -53290,6 +53398,11 @@ var CompressionMethod;
CompressionMethod["ZstdWithoutLong"] = "zstd-without-long";
CompressionMethod["Zstd"] = "zstd";
})(CompressionMethod = exports.CompressionMethod || (exports.CompressionMethod = {}));
var ArchiveToolType;
(function (ArchiveToolType) {
ArchiveToolType["GNU"] = "gnu";
ArchiveToolType["BSD"] = "bsd";
})(ArchiveToolType = exports.ArchiveToolType || (exports.ArchiveToolType = {}));
// The default number of retry attempts.
exports.DefaultRetryAttempts = 2;
// The default delay in milliseconds between retry attempts.
@@ -53298,6 +53411,12 @@ exports.DefaultRetryDelay = 5000;
// over the socket during this period, the socket is destroyed and the download
// is aborted.
exports.SocketTimeout = 5000;
// The default path of GNU tar on hosted Windows runners
exports.GnuTarPathOnWindows = `${process.env['PROGRAMFILES']}\\Git\\usr\\bin\\tar.exe`;
// The default path of BSD tar on hosted Windows runners
exports.SystemTarPathOnWindows = `${process.env['SYSTEMDRIVE']}\\Windows\\System32\\tar.exe`;
exports.TarFilename = 'cache.tar';
exports.ManifestFilename = 'manifest.txt';
//# sourceMappingURL=constants.js.map
/***/ }),

dist/save/index.js vendored

@@ -1177,10 +1177,6 @@ function getVersion(app) {
// Use zstandard if possible to maximize cache performance
function getCompressionMethod() {
return __awaiter(this, void 0, void 0, function* () {
if (process.platform === 'win32' && !(yield isGnuTarInstalled())) {
// Disable zstd due to bug https://github.com/actions/cache/issues/301
return constants_1.CompressionMethod.Gzip;
}
const versionOutput = yield getVersion('zstd');
const version = semver.clean(versionOutput);
if (!versionOutput.toLowerCase().includes('zstd command line interface')) {
@@ -1204,13 +1200,16 @@ function getCacheFileName(compressionMethod) {
: constants_1.CacheFilename.Zstd;
}
exports.getCacheFileName = getCacheFileName;
function isGnuTarInstalled() {
function getGnuTarPathOnWindows() {
return __awaiter(this, void 0, void 0, function* () {
if (fs.existsSync(constants_1.GnuTarPathOnWindows)) {
return constants_1.GnuTarPathOnWindows;
}
const versionOutput = yield getVersion('tar');
return versionOutput.toLowerCase().includes('gnu tar');
return versionOutput.toLowerCase().includes('gnu tar') ? io.which('tar') : '';
});
}
exports.isGnuTarInstalled = isGnuTarInstalled;
exports.getGnuTarPathOnWindows = getGnuTarPathOnWindows;
function assertDefined(name, value) {
if (value === undefined) {
        throw Error(`Expected ${name} but value was undefined`);
@@ -3384,7 +3383,6 @@ const crypto = __importStar(__webpack_require__(417));
const fs = __importStar(__webpack_require__(747));
const url_1 = __webpack_require__(835);
const utils = __importStar(__webpack_require__(15));
const constants_1 = __webpack_require__(931);
const downloadUtils_1 = __webpack_require__(251);
const options_1 = __webpack_require__(538);
const requestUtils_1 = __webpack_require__(899);
@@ -3414,10 +3412,17 @@ function createHttpClient() {
const bearerCredentialHandler = new auth_1.BearerCredentialHandler(token);
return new http_client_1.HttpClient('actions/cache', [bearerCredentialHandler], getRequestOptions());
}
function getCacheVersion(paths, compressionMethod) {
const components = paths.concat(!compressionMethod || compressionMethod === constants_1.CompressionMethod.Gzip
? []
: [compressionMethod]);
function getCacheVersion(paths, compressionMethod, enableCrossOsArchive = false) {
const components = paths;
    // Add the compression method to the cache version so a compressed
    // cache is only restored with a matching compression method
if (compressionMethod) {
components.push(compressionMethod);
}
    // Add a 'windows-only' component on Windows unless enableCrossOsArchive is enabled
if (process.platform === 'win32' && !enableCrossOsArchive) {
components.push('windows-only');
}
// Add salt to cache version to support breaking changes in cache entry
components.push(versionSalt);
return crypto
@@ -3429,9 +3434,10 @@ exports.getCacheVersion = getCacheVersion;
function getCacheEntry(keys, paths, options) {
return __awaiter(this, void 0, void 0, function* () {
const httpClient = createHttpClient();
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod);
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod, options === null || options === void 0 ? void 0 : options.enableCrossOsArchive);
const resource = `cache?keys=${encodeURIComponent(keys.join(','))}&version=${version}`;
const response = yield requestUtils_1.retryTypedResponse('getCacheEntry', () => __awaiter(this, void 0, void 0, function* () { return httpClient.getJson(getCacheApiUrl(resource)); }));
// Cache not found
if (response.statusCode === 204) {
// List cache for primary key only if cache miss occurs
if (core.isDebug()) {
@@ -3445,6 +3451,7 @@ function getCacheEntry(keys, paths, options) {
const cacheResult = response.result;
const cacheDownloadUrl = cacheResult === null || cacheResult === void 0 ? void 0 : cacheResult.archiveLocation;
if (!cacheDownloadUrl) {
            // Cache archiveLocation not found. This should never happen, and hence bail out.
throw new Error('Cache not found.');
}
core.setSecret(cacheDownloadUrl);
@@ -3490,7 +3497,7 @@ exports.downloadCache = downloadCache;
function reserveCache(key, paths, options) {
return __awaiter(this, void 0, void 0, function* () {
const httpClient = createHttpClient();
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod);
const version = getCacheVersion(paths, options === null || options === void 0 ? void 0 : options.compressionMethod, options === null || options === void 0 ? void 0 : options.enableCrossOsArchive);
const reserveCacheRequest = {
key,
version,
@@ -4970,7 +4977,8 @@ var Inputs;
Inputs["Key"] = "key";
Inputs["Path"] = "path";
Inputs["RestoreKeys"] = "restore-keys";
Inputs["UploadChunkSize"] = "upload-chunk-size"; // Input for cache, save action
Inputs["UploadChunkSize"] = "upload-chunk-size";
Inputs["EnableCrossOsArchive"] = "enableCrossOsArchive"; // Input for cache, restore, save action
})(Inputs = exports.Inputs || (exports.Inputs = {}));
var Outputs;
(function (Outputs) {
@@ -38124,27 +38132,27 @@ var __importStar = (this && this.__importStar) || function (mod) {
};
Object.defineProperty(exports, "__esModule", { value: true });
const exec_1 = __webpack_require__(986);
const core_1 = __webpack_require__(470);
const io = __importStar(__webpack_require__(1));
const fs_1 = __webpack_require__(747);
const path = __importStar(__webpack_require__(622));
const utils = __importStar(__webpack_require__(15));
const constants_1 = __webpack_require__(931);
const IS_WINDOWS = process.platform === 'win32';
function getTarPath(args, compressionMethod) {
core_1.exportVariable('MSYS', 'winsymlinks:nativestrict');
// Returns tar path and type: BSD or GNU
function getTarPath() {
return __awaiter(this, void 0, void 0, function* () {
switch (process.platform) {
case 'win32': {
const systemTar = `${process.env['windir']}\\System32\\tar.exe`;
if (compressionMethod !== constants_1.CompressionMethod.Gzip) {
// We only use zstandard compression on windows when gnu tar is installed due to
// a bug with compressing large files with bsdtar + zstd
args.push('--force-local');
const gnuTar = yield utils.getGnuTarPathOnWindows();
const systemTar = constants_1.SystemTarPathOnWindows;
if (gnuTar) {
                    // Use GNU tar as the default on Windows
return { path: gnuTar, type: constants_1.ArchiveToolType.GNU };
}
else if (fs_1.existsSync(systemTar)) {
return systemTar;
}
else if (yield utils.isGnuTarInstalled()) {
args.push('--force-local');
return { path: systemTar, type: constants_1.ArchiveToolType.BSD };
}
break;
}
@@ -38152,25 +38160,92 @@ function getTarPath(args, compressionMethod) {
const gnuTar = yield io.which('gtar', false);
if (gnuTar) {
// fix permission denied errors when extracting BSD tar archive with GNU tar - https://github.com/actions/cache/issues/527
args.push('--delay-directory-restore');
return gnuTar;
return { path: gnuTar, type: constants_1.ArchiveToolType.GNU };
}
else {
return {
path: yield io.which('tar', true),
type: constants_1.ArchiveToolType.BSD
};
}
break;
}
default:
break;
}
return yield io.which('tar', true);
        // Default assumption is that GNU tar is present in the PATH
return {
path: yield io.which('tar', true),
type: constants_1.ArchiveToolType.GNU
};
});
}
function execTar(args, compressionMethod, cwd) {
// Return the tar arguments based on tarPath, compressionMethod, method type and OS
function getTarArgs(tarPath, compressionMethod, type, archivePath = '') {
return __awaiter(this, void 0, void 0, function* () {
try {
yield exec_1.exec(`"${yield getTarPath(args, compressionMethod)}"`, args, { cwd });
const args = [`"${tarPath.path}"`];
const cacheFileName = utils.getCacheFileName(compressionMethod);
const tarFile = 'cache.tar';
const workingDirectory = getWorkingDirectory();
        // Specific args for BSD tar on Windows for the zstd workaround
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
// Method specific args
switch (type) {
case 'create':
args.push('--posix', '-cf', BSD_TAR_ZSTD
? tarFile
: cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '--exclude', BSD_TAR_ZSTD
? tarFile
: cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P', '-C', workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '--files-from', constants_1.ManifestFilename);
break;
case 'extract':
args.push('-xf', BSD_TAR_ZSTD
? tarFile
: archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P', '-C', workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'));
break;
case 'list':
args.push('-tf', BSD_TAR_ZSTD
? tarFile
: archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'), '-P');
break;
}
catch (error) {
throw new Error(`Tar failed with error: ${error === null || error === void 0 ? void 0 : error.message}`);
// Platform specific args
if (tarPath.type === constants_1.ArchiveToolType.GNU) {
switch (process.platform) {
case 'win32':
args.push('--force-local');
break;
case 'darwin':
args.push('--delay-directory-restore');
break;
}
}
return args;
});
}
// Returns commands to run tar and compression program
function getCommands(compressionMethod, type, archivePath = '') {
return __awaiter(this, void 0, void 0, function* () {
let args;
const tarPath = yield getTarPath();
const tarArgs = yield getTarArgs(tarPath, compressionMethod, type, archivePath);
const compressionArgs = type !== 'create'
? yield getDecompressionProgram(tarPath, compressionMethod, archivePath)
: yield getCompressionProgram(tarPath, compressionMethod);
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
if (BSD_TAR_ZSTD && type !== 'create') {
args = [[...compressionArgs].join(' '), [...tarArgs].join(' ')];
}
else {
args = [[...tarArgs].join(' '), [...compressionArgs].join(' ')];
}
if (BSD_TAR_ZSTD) {
return args;
}
return [args.join(' ')];
});
}
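// Illustration (hypothetical paths): for a zstd archive extracted by BSD tar on
// Windows, getCommands returns two separate commands, roughly:
//   zstd -d --long=30 --force -o cache.tar D:/a/_temp/cache.tzst
//   "C:\Windows\System32\tar.exe" -xf cache.tar -P -C D:/a/repo
// With GNU tar, a single command is returned instead, e.g. on Linux:
//   "/usr/bin/tar" -xf /tmp/cache.tzst -P -C /home/runner/work/repo --use-compress-program unzstd --long=30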
function getWorkingDirectory() {
@ -38178,91 +38253,116 @@ function getWorkingDirectory() {
return (_a = process.env['GITHUB_WORKSPACE']) !== null && _a !== void 0 ? _a : process.cwd();
}
// Common function for extractTar and listTar to get the compression method
function getCompressionProgram(compressionMethod) {
// -d: Decompress.
// unzstd is equivalent to 'zstd -d'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return [
'--use-compress-program',
IS_WINDOWS ? 'zstd -d --long=30' : 'unzstd --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return ['--use-compress-program', IS_WINDOWS ? 'zstd -d' : 'unzstd'];
default:
return ['-z'];
}
function getDecompressionProgram(tarPath, compressionMethod, archivePath) {
return __awaiter(this, void 0, void 0, function* () {
// -d: Decompress.
// unzstd is equivalent to 'zstd -d'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return BSD_TAR_ZSTD
? [
'zstd -d --long=30 --force -o',
constants_1.TarFilename,
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
]
: [
'--use-compress-program',
IS_WINDOWS ? '"zstd -d --long=30"' : 'unzstd --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return BSD_TAR_ZSTD
? [
'zstd -d --force -o',
constants_1.TarFilename,
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
]
: ['--use-compress-program', IS_WINDOWS ? '"zstd -d"' : 'unzstd'];
default:
return ['-z'];
}
});
}
// Used for creating the archive
// -T#: Compress using # working threads. If # is 0, attempt to detect and use the number of physical CPU cores.
// zstdmt is equivalent to 'zstd -T0'
// --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
// Using 30 here because we also support 32-bit self-hosted runners.
// Long range mode was added to zstd in the v1.3.2 release, so we do not use --long with older versions of zstd.
function getCompressionProgram(tarPath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
const cacheFileName = utils.getCacheFileName(compressionMethod);
const BSD_TAR_ZSTD = tarPath.type === constants_1.ArchiveToolType.BSD &&
compressionMethod !== constants_1.CompressionMethod.Gzip &&
IS_WINDOWS;
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return BSD_TAR_ZSTD
? [
'zstd -T0 --long=30 --force -o',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
constants_1.TarFilename
]
: [
'--use-compress-program',
IS_WINDOWS ? '"zstd -T0 --long=30"' : 'zstdmt --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return BSD_TAR_ZSTD
? [
'zstd -T0 --force -o',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
constants_1.TarFilename
]
: ['--use-compress-program', IS_WINDOWS ? '"zstd -T0"' : 'zstdmt'];
default:
return ['-z'];
}
});
}
// Executes all commands as separate processes
function execCommands(commands, cwd) {
return __awaiter(this, void 0, void 0, function* () {
for (const command of commands) {
try {
yield exec_1.exec(command, undefined, { cwd });
}
catch (error) {
throw new Error(`${command.split(' ')[0]} failed with error: ${error === null || error === void 0 ? void 0 : error.message}`);
}
}
});
}
// List the contents of a tar
function listTar(archivePath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
const args = [
...getCompressionProgram(compressionMethod),
'-tf',
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P'
];
yield execTar(args, compressionMethod);
const commands = yield getCommands(compressionMethod, 'list', archivePath);
yield execCommands(commands);
});
}
exports.listTar = listTar;
// Extract a tar
function extractTar(archivePath, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
// Create directory to extract tar into
const workingDirectory = getWorkingDirectory();
yield io.mkdirP(workingDirectory);
const args = [
...getCompressionProgram(compressionMethod),
'-xf',
archivePath.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P',
'-C',
workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/')
];
yield execTar(args, compressionMethod);
const commands = yield getCommands(compressionMethod, 'extract', archivePath);
yield execCommands(commands);
});
}
exports.extractTar = extractTar;
// Create a tar
function createTar(archiveFolder, sourceDirectories, compressionMethod) {
return __awaiter(this, void 0, void 0, function* () {
// Write source directories to manifest.txt to avoid command length limits
const manifestFilename = 'manifest.txt';
const cacheFileName = utils.getCacheFileName(compressionMethod);
fs_1.writeFileSync(path.join(archiveFolder, manifestFilename), sourceDirectories.join('\n'));
const workingDirectory = getWorkingDirectory();
        // -T#: Compress using # working threads. If # is 0, attempt to detect and use the number of physical CPU cores.
        // zstdmt is equivalent to 'zstd -T0'
        // --long=#: Enables long distance matching with # bits. Maximum is 30 (1GB) on 32-bit OS and 31 (2GB) on 64-bit.
        // Using 30 here because we also support 32-bit self-hosted runners.
        // Long range mode was added to zstd in the v1.3.2 release, so we do not use --long with older versions of zstd.
function getCompressionProgram() {
switch (compressionMethod) {
case constants_1.CompressionMethod.Zstd:
return [
'--use-compress-program',
IS_WINDOWS ? 'zstd -T0 --long=30' : 'zstdmt --long=30'
];
case constants_1.CompressionMethod.ZstdWithoutLong:
return ['--use-compress-program', IS_WINDOWS ? 'zstd -T0' : 'zstdmt'];
default:
return ['-z'];
}
}
const args = [
'--posix',
...getCompressionProgram(),
'-cf',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'--exclude',
cacheFileName.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'-P',
'-C',
workingDirectory.replace(new RegExp(`\\${path.sep}`, 'g'), '/'),
'--files-from',
manifestFilename
];
yield execTar(args, compressionMethod, archiveFolder);
fs_1.writeFileSync(path.join(archiveFolder, constants_1.ManifestFilename), sourceDirectories.join('\n'));
const commands = yield getCommands(compressionMethod, 'create');
yield execCommands(commands, archiveFolder);
});
}
exports.createTar = createTar;
@ -38497,7 +38597,7 @@ var __importStar = (this && this.__importStar) || function (mod) {
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.isCacheFeatureAvailable = exports.getInputAsInt = exports.getInputAsArray = exports.isValidEvent = exports.logWarning = exports.isExactKeyMatch = exports.isGhes = void 0;
exports.isCacheFeatureAvailable = exports.getInputAsBool = exports.getInputAsInt = exports.getInputAsArray = exports.isValidEvent = exports.logWarning = exports.isExactKeyMatch = exports.isGhes = void 0;
const cache = __importStar(__webpack_require__(692));
const core = __importStar(__webpack_require__(470));
const constants_1 = __webpack_require__(196);
@ -38540,6 +38640,11 @@ function getInputAsInt(name, options) {
return value;
}
exports.getInputAsInt = getInputAsInt;
function getInputAsBool(name, options) {
const result = core.getInput(name, options);
return result.toLowerCase() === "true";
}
exports.getInputAsBool = getInputAsBool;
function isCacheFeatureAvailable() {
if (cache.isFeatureAvailable()) {
return true;
@ -41019,9 +41124,8 @@ function saveImpl(stateProvider) {
const cachePaths = utils.getInputAsArray(constants_1.Inputs.Path, {
required: true
});
cacheId = yield cache.saveCache(cachePaths, primaryKey, {
uploadChunkSize: utils.getInputAsInt(constants_1.Inputs.UploadChunkSize)
});
const enableCrossOsArchive = utils.getInputAsBool(constants_1.Inputs.EnableCrossOsArchive);
cacheId = yield cache.saveCache(cachePaths, primaryKey, { uploadChunkSize: utils.getInputAsInt(constants_1.Inputs.UploadChunkSize) }, enableCrossOsArchive);
if (cacheId != -1) {
core.info(`Cache saved with key: ${primaryKey}`);
}
@ -47236,9 +47340,10 @@ exports.isFeatureAvailable = isFeatureAvailable;
* @param primaryKey an explicit key for restoring the cache
* @param restoreKeys an optional ordered list of keys to use for restoring the cache if no cache hit occurred for key
* @param downloadOptions cache download options
 * @param enableCrossOsArchive an optional boolean that, when enabled, allows a cache created on any platform to be restored on Windows
* @returns string returns the key for the cache hit, otherwise returns undefined
*/
function restoreCache(paths, primaryKey, restoreKeys, options) {
function restoreCache(paths, primaryKey, restoreKeys, options, enableCrossOsArchive = false) {
return __awaiter(this, void 0, void 0, function* () {
checkPaths(paths);
restoreKeys = restoreKeys || [];
@ -47256,7 +47361,8 @@ function restoreCache(paths, primaryKey, restoreKeys, options) {
try {
// path are needed to compute version
const cacheEntry = yield cacheHttpClient.getCacheEntry(keys, paths, {
compressionMethod
compressionMethod,
enableCrossOsArchive
});
if (!(cacheEntry === null || cacheEntry === void 0 ? void 0 : cacheEntry.archiveLocation)) {
// Cache not found
@ -47303,10 +47409,11 @@ exports.restoreCache = restoreCache;
*
* @param paths a list of file paths to be cached
* @param key an explicit key for restoring the cache
 * @param enableCrossOsArchive an optional boolean that, when enabled, saves the cache on Windows so it can be restored on any platform
* @param options cache upload options
* @returns number returns cacheId if the cache was saved successfully and throws an error if save fails
*/
function saveCache(paths, key, options) {
function saveCache(paths, key, options, enableCrossOsArchive = false) {
var _a, _b, _c, _d, _e;
return __awaiter(this, void 0, void 0, function* () {
checkPaths(paths);
@ -47337,6 +47444,7 @@ function saveCache(paths, key, options) {
core.debug('Reserving Cache');
const reserveCacheResponse = yield cacheHttpClient.reserveCache(key, paths, {
compressionMethod,
enableCrossOsArchive,
cacheSize: archiveFileSize
});
if ((_a = reserveCacheResponse === null || reserveCacheResponse === void 0 ? void 0 : reserveCacheResponse.result) === null || _a === void 0 ? void 0 : _a.cacheId) {
@ -53263,6 +53371,11 @@ var CompressionMethod;
CompressionMethod["ZstdWithoutLong"] = "zstd-without-long";
CompressionMethod["Zstd"] = "zstd";
})(CompressionMethod = exports.CompressionMethod || (exports.CompressionMethod = {}));
var ArchiveToolType;
(function (ArchiveToolType) {
ArchiveToolType["GNU"] = "gnu";
ArchiveToolType["BSD"] = "bsd";
})(ArchiveToolType = exports.ArchiveToolType || (exports.ArchiveToolType = {}));
// The default number of retry attempts.
exports.DefaultRetryAttempts = 2;
// The default delay in milliseconds between retry attempts.
@ -53271,6 +53384,12 @@ exports.DefaultRetryDelay = 5000;
// over the socket during this period, the socket is destroyed and the download
// is aborted.
exports.SocketTimeout = 5000;
// The default path of GNU tar on hosted Windows runners
exports.GnuTarPathOnWindows = `${process.env['PROGRAMFILES']}\\Git\\usr\\bin\\tar.exe`;
// The default path of BSD tar on hosted Windows runners
exports.SystemTarPathOnWindows = `${process.env['SYSTEMDRIVE']}\\Windows\\System32\\tar.exe`;
exports.TarFilename = 'cache.tar';
exports.ManifestFilename = 'manifest.txt';
//# sourceMappingURL=constants.js.map
/***/ }),

View File

@ -38,6 +38,7 @@
- [Swift, Objective-C - Carthage](#swift-objective-c---carthage)
- [Swift, Objective-C - CocoaPods](#swift-objective-c---cocoapods)
- [Swift - Swift Package Manager](#swift---swift-package-manager)
- [Swift - Mint](#swift---mint)
## C# - NuGet
@ -641,3 +642,18 @@ whenever possible:
restore-keys: |
${{ runner.os }}-spm-
```
## Swift - Mint
```yaml
env:
MINT_PATH: .mint/lib
MINT_LINK_PATH: .mint/bin
steps:
- uses: actions/cache@v3
with:
path: .mint
key: ${{ runner.os }}-mint-${{ hashFiles('**/Mintfile') }}
restore-keys: |
${{ runner.os }}-mint-
```

42
package-lock.json generated
View File

@ -1,15 +1,15 @@
{
"name": "cache",
"version": "3.2.2",
"version": "3.2.3",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "cache",
"version": "3.2.2",
"version": "3.2.3",
"license": "MIT",
"dependencies": {
"@actions/cache": "^3.1.1",
"@actions/cache": "^3.1.2",
"@actions/core": "^1.10.0",
"@actions/exec": "^1.1.1",
"@actions/io": "^1.1.2"
@ -36,9 +36,9 @@
}
},
"node_modules/@actions/cache": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/@actions/cache/-/cache-3.1.1.tgz",
"integrity": "sha512-gOUdNap8FvlpoQAMYWiNPi9Ltt7jKWv9RuUVKg9cp/vQA9qTXoKiBkTioUAgIejh/qf7jrojYn3lCyIRIsoSeQ==",
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/@actions/cache/-/cache-3.1.2.tgz",
"integrity": "sha512-3XeKcXIonfIbqvW7gPm/VLOhv1RHQ1dtTgSBCH6OUhCgSTii9bEVgu0PIms7UbLnXeMCKFzECfpbud8fJEvBbQ==",
"dependencies": {
"@actions/core": "^1.10.0",
"@actions/exec": "^1.0.1",
@ -8146,9 +8146,9 @@
"dev": true
},
"node_modules/json5": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-2.2.1.tgz",
"integrity": "sha512-1hqLFMSrGHRHxav9q9gNjJ5EXznIxGVO09xQRrwplcS8qs28pZ8s8hupZAmqDwZUmVZ2Qb2jnyPOWcDH8m8dlA==",
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz",
"integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==",
"dev": true,
"bin": {
"json5": "lib/cli.js"
@ -9406,9 +9406,9 @@
}
},
"node_modules/tsconfig-paths/node_modules/json5": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
"integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
"dev": true,
"dependencies": {
"minimist": "^1.2.0"
@ -9722,9 +9722,9 @@
},
"dependencies": {
"@actions/cache": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/@actions/cache/-/cache-3.1.1.tgz",
"integrity": "sha512-gOUdNap8FvlpoQAMYWiNPi9Ltt7jKWv9RuUVKg9cp/vQA9qTXoKiBkTioUAgIejh/qf7jrojYn3lCyIRIsoSeQ==",
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/@actions/cache/-/cache-3.1.2.tgz",
"integrity": "sha512-3XeKcXIonfIbqvW7gPm/VLOhv1RHQ1dtTgSBCH6OUhCgSTii9bEVgu0PIms7UbLnXeMCKFzECfpbud8fJEvBbQ==",
"requires": {
"@actions/core": "^1.10.0",
"@actions/exec": "^1.0.1",
@ -16062,9 +16062,9 @@
"dev": true
},
"json5": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-2.2.1.tgz",
"integrity": "sha512-1hqLFMSrGHRHxav9q9gNjJ5EXznIxGVO09xQRrwplcS8qs28pZ8s8hupZAmqDwZUmVZ2Qb2jnyPOWcDH8m8dlA==",
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz",
"integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==",
"dev": true
},
"kleur": {
@ -16967,9 +16967,9 @@
},
"dependencies": {
"json5": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
"integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
"dev": true,
"requires": {
"minimist": "^1.2.0"

View File

@ -1,6 +1,6 @@
{
"name": "cache",
"version": "3.2.2",
"version": "3.2.3",
"private": true,
"description": "Cache dependencies and build outputs",
"main": "dist/restore/index.js",
@ -23,7 +23,7 @@
"author": "GitHub",
"license": "MIT",
"dependencies": {
"@actions/cache": "^3.1.1",
"@actions/cache": "^3.1.2",
"@actions/core": "^1.10.0",
"@actions/exec": "^1.1.1",
"@actions/io": "^1.1.2"

View File

@ -11,6 +11,10 @@ inputs:
restore-keys:
description: 'An ordered list of keys to use for restoring stale cache if no cache hit occurred for key. Note `cache-hit` returns false in this case.'
required: false
enableCrossOsArchive:
description: 'An optional boolean which, when enabled, allows Windows runners to restore caches that were saved on other platforms'
default: 'false'
required: false
outputs:
cache-hit:
description: 'A boolean value to indicate an exact match was found for the primary key'

View File

@ -11,6 +11,10 @@ inputs:
upload-chunk-size:
description: 'The chunk size used to split up large files during upload, in bytes'
required: false
enableCrossOsArchive:
description: 'An optional boolean which, when enabled, allows Windows runners to save caches that can be restored on other platforms'
default: 'false'
required: false
runs:
using: 'node16'
main: '../dist/save-only/index.js'

View File

@ -2,7 +2,8 @@ export enum Inputs {
Key = "key", // Input for cache, restore, save action
Path = "path", // Input for cache, restore, save action
RestoreKeys = "restore-keys", // Input for cache, restore action
UploadChunkSize = "upload-chunk-size" // Input for cache, save action
UploadChunkSize = "upload-chunk-size", // Input for cache, save action
EnableCrossOsArchive = "enableCrossOsArchive" // Input for cache, restore, save action
}
export enum Outputs {

View File

@ -31,11 +31,16 @@ async function restoreImpl(
const cachePaths = utils.getInputAsArray(Inputs.Path, {
required: true
});
const enableCrossOsArchive = utils.getInputAsBool(
Inputs.EnableCrossOsArchive
);
const cacheKey = await cache.restoreCache(
cachePaths,
primaryKey,
restoreKeys
restoreKeys,
{},
enableCrossOsArchive
);
if (!cacheKey) {

View File

@ -52,9 +52,16 @@ async function saveImpl(stateProvider: IStateProvider): Promise<number | void> {
required: true
});
cacheId = await cache.saveCache(cachePaths, primaryKey, {
uploadChunkSize: utils.getInputAsInt(Inputs.UploadChunkSize)
});
const enableCrossOsArchive = utils.getInputAsBool(
Inputs.EnableCrossOsArchive
);
cacheId = await cache.saveCache(
cachePaths,
primaryKey,
{ uploadChunkSize: utils.getInputAsInt(Inputs.UploadChunkSize) },
enableCrossOsArchive
);
if (cacheId != -1) {
core.info(`Cache saved with key: ${primaryKey}`);

View File

@ -52,6 +52,14 @@ export function getInputAsInt(
return value;
}
export function getInputAsBool(
name: string,
options?: core.InputOptions
): boolean {
const result = core.getInput(name, options);
return result.toLowerCase() === "true";
}
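// Note: any value other than the string "true" (case-insensitive), including an
// unset input, yields false.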
export function isCacheFeatureAvailable(): boolean {
if (cache.isFeatureAvailable()) {
return true;

View File

@ -13,6 +13,7 @@ interface CacheInput {
path: string;
key: string;
restoreKeys?: string[];
enableCrossOsArchive?: boolean;
}
export function setInputs(input: CacheInput): void {
@ -20,6 +21,11 @@ export function setInputs(input: CacheInput): void {
setInput(Inputs.Key, input.key);
input.restoreKeys &&
setInput(Inputs.RestoreKeys, input.restoreKeys.join("\n"));
input.enableCrossOsArchive !== undefined &&
setInput(
Inputs.EnableCrossOsArchive,
input.enableCrossOsArchive.toString()
);
}
export function clearInputs(): void {
@ -27,4 +33,5 @@ export function clearInputs(): void {
delete process.env[getInputName(Inputs.Key)];
delete process.env[getInputName(Inputs.RestoreKeys)];
delete process.env[getInputName(Inputs.UploadChunkSize)];
delete process.env[getInputName(Inputs.EnableCrossOsArchive)];
}

View File

@ -19,23 +19,10 @@ A cache today is immutable and cannot be updated. But some use cases require the
## Use cache across feature branches
Reusing caches across feature branches is not allowed today, to provide cache [isolation](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache). However, if both feature branches are created from the default branch, a good way to share a cache is to ensure that the default branch has one; that cache will then be consumable by both feature branches, as in the sketch below.
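A minimal sketch of seeding such a cache (the npm path and key here are assumptions; substitute your own) is a workflow that runs on pushes to the default branch:
```yaml
on:
  push:
    branches: [main]

jobs:
  seed-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
```
Feature-branch workflows using the same `key` expression can then restore this cache.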
## Improving cache restore performance on Windows/Using cross-os caching
Currently, cache restore is slow on Windows because tar is inherently slow there and the `gzip` compression algorithm is in use. `zstd` is the default algorithm on Linux and macOS, but it was disabled on Windows due to issues with BSD tar (libarchive), the tar implementation Windows ships with.
To improve cache restore performance, we can re-enable `zstd` as the compression algorithm using the following workaround. Add the following step to your workflow before the cache step:
```yaml
- if: ${{ runner.os == 'Windows' }}
name: Use GNU tar
shell: cmd
run: |
echo "Adding GNU tar to PATH"
echo C:\Program Files\Git\usr\bin>>"%GITHUB_PATH%"
```
The `cache` action will then use GNU tar instead of BSD tar on Windows. This works as-is on all GitHub-hosted runners. For self-hosted runners, please ensure GNU tar and `zstd` are installed.
The above workaround is also needed if you wish to use cross-OS caching, since a difference in compression algorithms results in different cache versions for the same cache key. The workaround ensures `zstd` is used on all platforms, and therefore the same cache version for the same cache key.
## Cross OS cache
From `v3.2.3`, caches are cross-OS compatible when the `enableCrossOsArchive` input is passed as true. This means that a cache created on `ubuntu-latest` or `macos-latest` can be used by `windows-latest` and vice versa, provided the workflow running on `windows-latest` sets the `enableCrossOsArchive` input to true. This is useful for caching dependencies that are independent of the runner platform, reducing cache-quota consumption and allowing builds for multiple platforms from the same cache. Things to keep in mind while using this feature (an example follows the list below):
- Only cache files that are compatible across OSs.
- Caching symlinks might cause issues during restoration, as they work differently on different OSs.
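As an illustrative sketch (the path and key are assumptions), a `windows-latest` job opting in might look like the following; note the key deliberately omits `runner.os` so every platform computes the same key:
```yaml
jobs:
  build-windows:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/cache@v3
        with:
          path: vendor/bundle    # platform-independent files only
          key: deps-${{ hashFiles('**/Gemfile.lock') }}
          enableCrossOsArchive: true
```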
## Force deletion of caches overriding default cache eviction policy
Caches have a [branch scope restriction](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache) in place. This means that if caches for a specific branch are using a lot of storage quota, more frequently used caches from the `default` branch may get thrashed. For example, if many pull requests on a repo are creating caches, those caches cannot be used in the default branch scope but will still occupy a lot of space until they are cleaned up by the [eviction policy](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy). Sometimes we want to clean them up on a faster cadence to ensure the default branch's caches are not thrashed. To achieve this, the [gh-actions-cache CLI](https://github.com/actions/gh-actions-cache/) can be used to delete caches for specific branches, as sketched below.
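A sketch of such a cleanup (the branch ref and token wiring are assumptions; `list` and `delete` are subcommands of the gh-actions-cache extension):
```yaml
name: Cleanup caches for closed PRs
on:
  pull_request:
    types: [closed]

jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - name: Delete caches scoped to the PR merge branch
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh extension install actions/gh-actions-cache
          REPO="${{ github.repository }}"
          BRANCH="refs/pull/${{ github.event.pull_request.number }}/merge"
          for key in $(gh actions-cache list -R "$REPO" -B "$BRANCH" | cut -f 1); do
            gh actions-cache delete "$key" -R "$REPO" -B "$BRANCH" --confirm
          done
```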