chore: add appium-support to repo

- add `npm start` script to run appium server
- add `npm run doctor` convenience script to run doctor
- renamed `clean` script to `reinstall`
- added new `clean` script to just clean
- updated the "appium from source" doc
Christopher Hiller
2021-05-14 14:11:55 -07:00
parent 399f3f193a
commit 4a203fa876
59 changed files with 5927 additions and 102 deletions
+2 -1
@@ -1,6 +1,7 @@
{
"extends": "@appium/eslint-config-appium",
"overrides": [
{ "files": "packages/fake-driver/**/*", "rules": { "require-await": 0 } }
{ "files": "packages/fake-driver/**/*", "rules": { "require-await": 0 } },
{ "files": "packages/support/**/*", "globals": { "BigInt": "readonly" } }
]
}
@@ -11,10 +11,11 @@ request; for more information on how to run tests, keep reading!
Appium is written in JavaScript and runs on the [Node.js](https://nodejs.org/) engine. Currently
version 6+ is supported. While Node.js can be installed globally on the system,
a version manager is _highly_ recommended.
* NVM - [https://github.com/creationix/nvm](https://github.com/creationix/nvm)
* N - [https://github.com/tj/n](https://github.com/tj/n)
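For example, you could install a current Node.js version via NVM (the version number here is purely illustrative):
```bash
nvm install 14
nvm use 14
```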
Your Node.js installation will include the [NPM](https://www.npmjs.com/) package manager, which Appium
Your Node.js installation will include the [npm](https://www.npmjs.com/) package manager, which Appium
will need in order to manage dependencies. Appium supports npm version 3+.
### Setting up Appium from Source
@@ -26,39 +27,42 @@ instance of an Appium server, and then run your test.
The quick way to get started:
```
```bash
git clone https://github.com/appium/appium.git
cd appium
npm install
npm run build
node .
npm start
```
### Hacking on Appium
Install the [appium-doctor](https://github.com/appium/appium-doctor) tool, and run it to verify all of the
dependencies are set up correctly (since dependencies for building Appium
are different from those for simply running it):
```
npm install -g appium-doctor
appium-doctor --dev
```
Install the Node.js dependencies:
```
First, run
```bash
npm install
```
When pulling new code from GitHub, if there are changes to `package.json` it
is necessary to remove the old dependencies and re-run `npm install`:
Now, you can run [`appium-doctor`](https://npmjs.com/@appium/doctor) to verify
the prerequisites are present (since prerequisites for _building_ Appium
are different from those for simply _running_ it). `@appium/doctor` (and its
command-line executable, `appium-doctor`) lives in the Appium core monorepo,
and can be run via:
```bash
npm run doctor -- --dev
```
rm -rf node_modules && rm -rf package-lock.json && npm install
The `@appium/doctor` package can also be installed globally and used that way:
```bash
npm install --global @appium/doctor
appium-doctor --dev
```
At this point, you will be able to start the Appium server:
```
node .
```bash
npm start
```
See [the server documentation](/docs/en/writing-running-appium/server-args.md)
@@ -71,67 +75,45 @@ and added to system `PATH` environment variable. Also you would need the
android-19+ sdk installed.
From your local repo's command prompt, install/run the following:
Set up Appium by running:
```
rm -rf node_modules && rm -rf package-lock.json && npm install
```bash
npm run reinstall
```
Make sure you have one and only one Android emulator or device running, e.g.,
by running this command in another process (assuming the `emulator` command is
on your path):
```
```bash
emulator -avd <MyAvdName>
```
Now you are ready to run the Appium server via `node .`.
Now you are ready to run the Appium server via `npm start`.
#### Making sure you're up to date
Since Appium uses dev versions of some packages, it often becomes necessary to
install new `npm` packages or update various things. Running `npm install` will
install new packages or update various things. Running `npm install` will
update everything necessary. You will also need to do this when Appium bumps
its version up. Prior to running `npm install` it is recommended to remove
all the old dependencies in the `node_modules` directory:
```bash
npm run clean
```
rm -rf node_modules && rm -rf package-lock.json && npm install
To automatically reinstall, use:
```bash
npm run reinstall
```
### Different packages
Appium is made up of a number of different packages. While it is often possible
to work in a single package, it is also often the case that work, whether fixing
a bug or adding a new feature, requires working on multiple packages simultaneously.
Unfortunately the dependencies installed when running `npm install` are those that
have already been published, so some work is needed to link together local development
versions of the packages that are being worked on.
In the case where one package, `A`, depends on another package, `B`, the following steps
are necessary to link the two:
1. In one terminal, enter into package `B`
```
cd B
```
2. Use [NPM link](https://docs.npmjs.com/cli/link) to create symbolic link to this package
```
npm link
```
3. In another terminal, enter into package `A`
```
cd A
```
4. Use [NPM link](https://docs.npmjs.com/cli/link) to link the dependent package `B` to the development version
```
npm link B
```
Now the version of `B` that `A` uses will be your local version. Remember, however, that
changes made to the JavaScript will only be available when they have been transpiled, so
when you are going to test from package `A`, run `npm run build` in the directory for
package `B`.
Appium is made up of a number of different packages. As of v2.0, the core packages
live in a [_monorepo_](https://github.com/appium/appium) (including this documentation).
The packages themselves live in the `packages` subdirectory. Running `npm install`
will automatically install all dependencies for all packages in this directory by way of
[Lerna](https://lerna.js.org).
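For example, the root `package.json` in this commit defines per-package test scripts that wrap Lerna's `--scope` filter, so a single package can be exercised on its own:
```bash
npm run test:support
```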
### Running Tests
@@ -142,14 +124,14 @@ system is set up properly for the platforms you desire to test on.
Once your system is set up and your code is up to date, you can run unit tests
with:
```
```bash
npm run test
```
You can run functional tests for all supported platforms (after ensuring that
Appium is running in another window with `node .`) with:
Appium is running in another window with `npm start`) with:
```
```bash
npm run e2e-test
```
@@ -174,4 +156,4 @@ the commit from occurring.
Once code is committed and a [pull request](https://help.github.com/articles/about-pull-requests/)
is made to the correct Appium repository on [GitHub](https://github.com/), the Appium build system
will run all of the functional tests.
will run all of the functional tests.
+8 -4
@@ -14,21 +14,24 @@
"author": "https://github.com/appium",
"scripts": {
"build": "lerna exec gulp transpile",
"clean": "lerna clean -y && rm -rf node_modules && rm -f package-lock.json && npm install",
"clean": "lerna clean -y && rm -rf node_modules && rm -f package-lock.json && rm -rf packages/*/build",
"coverage": "lerna exec gulp coveralls",
"doctor": "node ./packages/doctor",
"e2e-test": "lerna exec gulp e2e-test",
"generate-docs": "lerna run generate-docs --scope appium",
"postinstall": "lerna bootstrap && lerna exec gulp prepublish",
"lint": "lerna exec --parallel gulp lint && lerna run lint --scope @appium/eslint-config-appium",
"lint:fix": "lerna exec --parallel gulp lint -- --fix",
"postinstall": "lerna bootstrap",
"precommit-msg": "echo 'Pre-commit checks...' && exit 0",
"precommit-test": "REPORTER=dot gulp -f packages/appium/gulpfile.js once",
"prepare": "lerna exec gulp prepublish",
"reinstall": "npm run clean && npm install",
"start": "node ./packages/appium",
"test": "lerna exec gulp once",
"test:appium": "lerna exec gulp once --scope appium",
"test:doctor": "lerna exec gulp once --scope @appium/doctor",
"test:fake-driver": "lerna exec gulp once --scope @appium/fake-driver",
"test:gulp-plugins": "lerna exec gulp once --scope @appium/gulp-plugins",
"test:support": "lerna exec gulp once --scope @appium/support",
"watch": "lerna exec watch"
},
"pre-commit": [
@@ -37,10 +40,10 @@
],
"devDependencies": {
"@appium/eslint-config-appium": "file:./packages/eslint-config-appium",
"@appium/fake-plugin": "^1.0.0",
"@babel/plugin-proposal-class-properties": "^7.13.0",
"@babel/register": "^7.13.16",
"appium-test-support": "^1.3.3",
"asyncbox": "^2.8.0",
"chai": "4.x",
"chai-as-promised": "7.x",
"eslint-find-rules": "^3.6.1",
@@ -48,6 +51,7 @@
"gulp": "^4.0.0",
"handlebars": "^4.2.0",
"lerna": "^4.0.0",
"mjpeg-server": "^0.3.0",
"mocha": "^8.0.1",
"pre-commit": "1.x",
"sinon": "^9.0.0",
+1 -1
@@ -57,7 +57,7 @@
"@appium/base-plugin": "^1.0.0",
"@babel/runtime": "^7.6.0",
"appium-base-driver": "^8.0.0-beta.6",
"appium-support": "2.x",
"@appium/support": "2.x",
"argparse": "^2.0.1",
"async-lock": "^1.0.0",
"asyncbox": "2.x",
+15 -15
@@ -1,23 +1,19 @@
{
"name": "@appium/doctor",
"version": "1.16.0",
"description": "Test environment for fitness to run Appium",
"keywords": [
"appium"
],
"version": "1.16.0",
"author": "appium",
"license": "Apache-2.0",
"bugs": {
"url": "https://github.com/appium/appium/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/appium/appium.git"
},
"bugs": {
"url": "https://github.com/appium/appium/issues"
},
"engines": {
"node": ">=10",
"npm": ">=6"
},
"license": "Apache-2.0",
"author": "appium",
"main": "./appium-doctor.js",
"bin": {
"appium-doctor": "./appium-doctor.js"
@@ -34,10 +30,14 @@
"build/index.js",
"build/lib"
],
"pre-commit": [
"precommit-msg",
"precommit-test"
],
"dependencies": {
"@appium/support": "^2.5.0",
"@babel/runtime": "^7.0.0",
"appium-adb": "^8.4.0",
"appium-support": "^2.5.0",
"authorize-ios": "^1.0.3",
"bluebird": "^3.5.1",
"colors": "^1.1.2",
@@ -47,11 +47,11 @@
"teen_process": "^1.3.1",
"yargs": "^17.0.0"
},
"pre-commit": [
"precommit-msg",
"precommit-test"
],
"devDependencies": {
"@appium/gulp-plugins": "^5.1.1"
},
"engines": {
"node": ">=10",
"npm": ">=6"
}
}
+19 -16
@@ -2,29 +2,32 @@
"name": "@appium/eslint-config-appium",
"version": "4.7.0",
"description": "Shared config for Appium's eslint style",
"repository": {
"type": "git",
"url": "https://github.com/appium/appium.git"
},
"main": "index.js",
"files": [
"index.js"
],
"scripts": {
"eslint:find:current-rules": "eslint-find-rules -c",
"eslint:find:all-rules": "eslint-find-rules -a",
"eslint:find:plugin-rules": "eslint-find-rules -p",
"eslint:find:unused-rules": "eslint-find-rules -u -n",
"lint": "eslint index.js"
},
"keywords": [
"eslint",
"eslintconfig",
"appium",
"es2015"
],
"author": "appium",
"repository": {
"type": "git",
"url": "https://github.com/appium/appium.git"
},
"bugs": {
"url": "https://github.com/appium/appium/issues"
},
"license": "Apache-2.0",
"author": "appium",
"main": "index.js",
"files": [
"index.js"
],
"scripts": {
"eslint:find:all-rules": "eslint-find-rules -a",
"eslint:find:current-rules": "eslint-find-rules -c",
"eslint:find:plugin-rules": "eslint-find-rules -p",
"eslint:find:unused-rules": "eslint-find-rules -u -n",
"lint": "eslint index.js"
},
"dependencies": {
"@babel/core": "^7.12.1",
"@babel/eslint-parser": "^7.12.1",
+4 -4
@@ -30,9 +30,9 @@
"screen.png"
],
"dependencies": {
"@appium/support": "^2.11.1",
"@babel/runtime": "^7.0.0",
"appium-base-driver": "^7.0.0",
"appium-support": "^2.11.1",
"asyncbox": "^2.3.2",
"bluebird": "^3.5.1",
"lodash": "^4.17.4",
@@ -40,6 +40,9 @@
"xmldom": "^0.x",
"xpath": "^0.x"
},
"devDependencies": {
"@appium/gulp-plugins": "^5.5.0"
},
"engines": {
"node": ">=12.x"
},
@@ -58,8 +61,5 @@
},
"greenkeeper": {
"ignore": []
},
"devDependencies": {
"@appium/gulp-plugins": "^5.5.0"
}
}
+150
@@ -0,0 +1,150 @@
# appium-support
Utility functions supporting libraries used across Appium packages.
`npm install appium-support`
Appium, as of version 1.5, is based entirely on promises, so this module provides promise wrappers for some common operations.
Most notably, we wrap `fs` for file system commands. Note the addition of `hasAccess`.
Also note that `fs.mkdir` doesn't throw an error if the directory already exists; it will just resolve.
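A minimal sketch of the promise-based `fs` wrappers in use (the paths are purely illustrative):
```js
import { fs } from 'appium-support';
async function example () {
  // hasAccess resolves to a boolean instead of throwing like fs.access does
  if (await fs.hasAccess('/path/to/file.txt')) {
    const contents = await fs.readFile('/path/to/file.txt', 'utf8');
    console.log(contents);
  }
  // resolves even if the directory already exists
  await fs.mkdir('/path/to/dir');
}
```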
### Methods
- system.isWindows
- system.isMac
- system.isLinux
- system.isOSWin64
- system.arch
- system.macOsxVersion
- util.hasContent - returns true if input string has content
- util.hasValue - returns true if input value is neither undefined nor null
- util.escapeSpace
- util.escapeSpecialChars
- util.localIp
- util.cancellableDelay
- util.multiResolve - multiple path.resolve
- util.unwrapElement - parse an element ID from an element object: e.g.: `{ELEMENT: 123, "element-6066-11e4-a52e-4f735466cecf": 123}` returns `123`
- util.wrapElement - convert an element ID to an element object of the form: e.g.: `123` returns `{ELEMENT: 123, "element-6066-11e4-a52e-4f735466cecf": 123}`
- *fs.hasAccess* - use this over `fs.access`
- *fs.exists* - calls `fs.hasAccess`
- *fs.rimraf*
- *fs.mkdir* - doesn't throw an error if directory already exists
- *fs.copyFile*
- fs.open
- fs.close
- fs.access
- fs.readFile
- fs.writeFile
- fs.write
- fs.readlink
- fs.chmod
- fs.unlink
- fs.readdir
- fs.stat
- fs.rename
- *fs.md5*
- plist.parsePlistFile
- plist.updatePlistFile
- mkdirp
- logger
- zip.extractAllTo - Extracts contents of a zipfile to a directory
- zip.readEntries - Reads entries (files and directories) of a zipfile
- zip.toInMemoryZip - Converts a directory into a base64 zipfile
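A short sketch of a few of the helpers above (return values follow the descriptions in this list):
```js
import { util } from 'appium-support';
// convert between element IDs and element objects
const el = util.wrapElement(123);
// => {ELEMENT: 123, 'element-6066-11e4-a52e-4f735466cecf': 123}
util.unwrapElement(el); // => 123
// truthiness helpers
util.hasValue(null); // => false
util.hasContent(''); // => false
```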
## logger
Basic logger defaulting to [npmlog](https://github.com/npm/npmlog) with special consideration for running
tests (doesn't output logs when run with `_TESTING=1` in the env).
### Logging levels
There are a number of levels, exposed as methods on the log object, at which messages can be logged. The built-in ones correspond to those of [npmlog](https://github.com/npm/npmlog#loglevelprefix-message-), and are:
`silly`, `verbose`, `info`, `http`, `warn`, and `error`. In addition there is a `debug` level.
The default threshold level is `verbose`.
The logged output, by default, will be `level prefix message`. So
```js
import { logger } from 'appium-support';
let log = logger.getLogger('mymodule');
log.warn('a warning');
```
Will produce
```shell
warn mymodule a warning
```
### Environment variables
There are two environment variable flags that affect the way the `appium-support` `logger` works.
`_TESTING`
- `_TESTING=1` stops output of logs when set to `1`.
`_FORCE_LOGS`
- This flag, when set to `1`, reverses the effect of `_TESTING`, so logs are output even in testing mode.
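For example, assuming a test script whose output goes through this logger:
```shell
# suppress log output while running tests
_TESTING=1 npm test
# force log output even when _TESTING is set
_TESTING=1 _FORCE_LOGS=1 npm test
```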
### Usage
`log.level`
- get and set the threshold level at which to display the logs. Any logs at or above this level will be displayed. The special level silent will prevent anything from being displayed ever. See [npmlog#level](https://github.com/npm/npmlog#loglevel).
`log[level](message)`
- logs to `level`
```js
import { logger } from 'appium-support';
let log = logger.getLogger('mymodule');
log.info('hi!');
// => info mymodule hi!
```
`log.unwrap()`
- retrieves the underlying [npmlog](https://github.com/npm/npmlog) object, in order to manage how logging is done at a low level (e.g., changing output streams, retrieving an array of messages, adding log levels, etc.).
```js
import { logger } from 'appium-support';
let log = logger.getLogger('mymodule');
log.info('hi!');
let npmlogger = log.unwrap();
// any `npmlog` methods
let logs = npmlogger.record;
// logs === [ { id: 0, level: 'info', prefix: 'mymodule', message: 'hi!', messageRaw: [ 'hi!' ] }]
```
`log.errorAndThrow(error)`
- logs the error passed in, at `error` level, and then throws the error. If the error passed in is not an instance of [Error](https://nodejs.org/api/errors.html#errors_class_error) (either directly, or a subclass of `Error`) it will be wrapped in a generic `Error` object.
```js
import { logger } from 'appium-support';
let log = logger.getLogger('mymodule');
// previously this would take two lines
log.error('This is an error');
throw new Error('This is an error');
// now it is compacted into a single call
log.errorAndThrow('This is an error');
```
+17
@@ -0,0 +1,17 @@
'use strict';
const gulp = require('gulp');
const boilerplate = require('@appium/gulp-plugins').boilerplate.use(gulp);
boilerplate({
build: 'appium-support',
coverage: {
files: [
'./build/test/**/*-specs.js',
'!./build/test/assets/**',
'!./build/test/images/**',
'!./build/test/**/*-e2e-specs.js'
],
verbose: true,
},
});
+28
@@ -0,0 +1,28 @@
import * as tempDir from './lib/tempdir';
import * as system from './lib/system';
import * as util from './lib/util';
import * as fsIndex from './lib/fs';
import * as net from './lib/net';
import * as plist from './lib/plist';
import * as mkdirpIndex from './lib/mkdirp';
import * as logger from './lib/logging';
import * as process from './lib/process';
import * as zip from './lib/zip';
import * as imageUtil from './lib/image-util';
import * as mjpeg from './lib/mjpeg';
import * as node from './lib/node';
import * as timing from './lib/timing';
const { fs } = fsIndex;
const { cancellableDelay } = util;
const { mkdirp } = mkdirpIndex;
export {
tempDir, system, util, fs, cancellableDelay, plist, mkdirp, logger, process,
zip, imageUtil, net, mjpeg, node, timing,
};
export default {
tempDir, system, util, fs, cancellableDelay, plist, mkdirp, logger, process,
zip, imageUtil, net, mjpeg, node, timing,
};
+174
@@ -0,0 +1,174 @@
// jshint ignore: start
import _fs from 'fs';
import rimraf from 'rimraf';
import ncp from 'ncp';
import B from 'bluebird';
import mv from 'mv';
import which from 'which';
import glob from 'glob';
import crypto from 'crypto';
import klaw from 'klaw';
import sanitize from 'sanitize-filename';
import { pluralize } from './util';
import log from './logger';
import Timer from './timing';
const mkdirAsync = B.promisify(_fs.mkdir);
const ncpAsync = B.promisify(ncp);
const fs = {
async hasAccess (path) {
try {
await this.access(path, _fs.R_OK);
} catch (err) {
return false;
}
return true;
},
exists (path) { return this.hasAccess(path); },
rimraf: B.promisify(rimraf),
rimrafSync: rimraf.sync.bind(rimraf),
async mkdir (...args) {
try {
return await mkdirAsync(...args);
} catch (err) {
if (err && err.code !== 'EEXIST') {
throw err;
}
}
},
async copyFile (source, destination, ...otherArgs) {
if (!await this.hasAccess(source)) {
throw new Error(`The file at '${source}' does not exist or is not accessible`);
}
return await ncpAsync(source, destination, ...otherArgs);
},
async md5 (filePath) {
return await this.hash(filePath, 'md5');
},
mv: B.promisify(mv),
which: B.promisify(which),
glob: B.promisify(glob),
sanitizeName: sanitize,
async hash (filePath, algorithm = 'sha1') {
return await new B((resolve, reject) => {
const fileHash = crypto.createHash(algorithm);
const readStream = _fs.createReadStream(filePath);
readStream.on('error', (e) => reject(
new Error(`Cannot calculate ${algorithm} hash for '${filePath}'. Original error: ${e.message}`)));
readStream.on('data', (chunk) => fileHash.update(chunk));
readStream.on('end', () => resolve(fileHash.digest('hex')));
});
},
/** The callback function which will be called during the directory walking
* @name WalkDirCallback
* @function
* @param {string} itemPath The path of the file or folder
* @param {boolean} isDirectory Shows if it is a directory or a file
* @return {boolean} return true if you want to stop walking
*/
/**
* Walks a directory according to the given parameters. The callback will be invoked with a path joined with the dir parameter
* @param {string} dir Directory path where we will start walking
* @param {boolean} recursive Set it to true if you want to continue walking sub directories
* @param {WalkDirCallback} callback The callback to be called when a new path is found
* @throws {Error} If the `dir` parameter contains a path to an invalid folder
* @return {?string} returns the found path or null if the item was not found
*/
async walkDir (dir, recursive, callback) { //eslint-disable-line promise/prefer-await-to-callbacks
let isValidRoot = false;
let errMsg = null;
try {
isValidRoot = (await fs.stat(dir)).isDirectory();
} catch (e) {
errMsg = e.message;
}
if (!isValidRoot) {
throw Error(`'${dir}' is not a valid root directory` + (errMsg ? `. Original error: ${errMsg}` : ''));
}
let walker;
let fileCount = 0;
let directoryCount = 0;
const timer = new Timer().start();
return await new B(function (resolve, reject) {
let lastFileProcessed = B.resolve();
walker = klaw(dir, {
depthLimit: recursive ? -1 : 0,
});
walker.on('data', function (item) {
walker.pause();
if (!item.stats.isDirectory()) {
fileCount++;
} else {
directoryCount++;
}
// eslint-disable-next-line promise/prefer-await-to-callbacks
lastFileProcessed = B.try(async () => await callback(item.path, item.stats.isDirectory()))
.then(function (done = false) {
if (done) {
resolve(item.path);
} else {
walker.resume();
}
})
.catch(reject);
})
.on('error', function (err, item) {
log.warn(`Got an error while walking '${item.path}': ${err.message}`);
// klaw cannot get back from an ENOENT error
if (err.code === 'ENOENT') {
log.warn('All files may not have been accessed');
reject(err);
}
})
.on('end', function () {
lastFileProcessed
.then(resolve)
.catch(function (err) {
log.warn(`Unexpected error: ${err.message}`);
reject(err);
});
});
}).finally(function () {
log.debug(`Traversed ${pluralize('directory', directoryCount, true)} ` +
`and ${pluralize('file', fileCount, true)} ` +
`in ${timer.getDuration().asMilliSeconds.toFixed(0)}ms`);
if (walker) {
walker.destroy();
}
});
}
};
// add the supported `fs` functions
const simples = [
'open', 'close', 'access', 'readFile', 'writeFile', 'write', 'read',
'readlink', 'chmod', 'unlink', 'readdir', 'stat', 'rename', 'lstat',
'appendFile', 'realpath', 'symlink',
];
for (const s of simples) {
fs[s] = B.promisify(_fs[s]);
}
const syncFunctions = [
'createReadStream',
'createWriteStream',
];
for (const s of syncFunctions) {
fs[s] = _fs[s];
}
// add the constants from `fs`
const constants = [
'F_OK', 'R_OK', 'W_OK', 'X_OK', 'constants',
];
for (const c of constants) {
fs[c] = _fs[c];
}
export { fs };
export default fs;
+707
@@ -0,0 +1,707 @@
import _ from 'lodash';
import Jimp from 'jimp';
import { Buffer } from 'buffer';
import { PNG } from 'pngjs';
import B from 'bluebird';
import { hasValue } from './util';
import log from './logger';
import { requirePackage } from './node';
const { MIME_JPEG, MIME_PNG, MIME_BMP } = Jimp;
let cv = null;
/**
* @typedef {Object} Region
* @property {number} left - The offset from the left side
* @property {number} top - The offset from the top
* @property {number} width - The width
* @property {number} height - The height
*/
/**
* @typedef {Object} Point
* @property {number} x - The x coordinate
* @property {number} y - The y coordinate
*/
/**
* @typedef {Object} Rect
* @property {number} x - The x coordinate of the top left corner
* @property {number} y - The y coordinate of the top left corner
* @property {number} width - The width
* @property {number} height - The height
*/
const BYTES_IN_PIXEL_BLOCK = 4;
const SCANLINE_FILTER_METHOD = 4;
const DEFAULT_MATCH_THRESHOLD = 0.5;
const MATCH_NEIGHBOUR_THRESHOLD = 10;
const AVAILABLE_DETECTORS = [
'AKAZE',
'AGAST',
'BRISK',
'FAST',
'GFTT',
'KAZE',
'MSER',
'SIFT',
'ORB',
];
const AVAILABLE_MATCHING_FUNCTIONS = [
'FlannBased',
'BruteForce',
'BruteForceL1',
'BruteForceHamming',
'BruteForceHammingLut',
'BruteForceSL2',
];
const MATCHING_METHODS = [
'TM_CCOEFF',
'TM_CCOEFF_NORMED',
'TM_CCORR',
'TM_CCORR_NORMED',
'TM_SQDIFF',
'TM_SQDIFF_NORMED',
];
const DEFAULT_MATCHING_METHOD = 'TM_CCOEFF_NORMED';
/**
* Transforms matching method name to the actual
* constant value from OpenCV library
*
* @param {string} name One of supported method names
* (see MATCHING_METHODS array above)
* @returns {number} The method value
* @throws {Error} if an unsupported method name is given
*/
function toMatchingMethod (name) {
if (!MATCHING_METHODS.includes(name)) {
throw new Error(`The matching method '${name}' is unknown. ` +
`Only the following matching methods are supported: ${MATCHING_METHODS}`);
}
return cv[name];
}
/**
* Utility function to get a Jimp image object from buffer or base64 data. Jimp
* is a great library however it does IO in the constructor so it's not
* convenient for our async/await model.
*
* @param {Buffer|string} data - binary image buffer or base64-encoded image
* string
* @returns {Jimp} - the jimp image object
*/
async function getJimpImage (data) {
return await new B((resolve, reject) => {
if (!_.isString(data) && !_.isBuffer(data)) {
return reject(new Error('Must initialize jimp object with string or buffer'));
}
// if data is a string, assume it is a base64-encoded image
if (_.isString(data)) {
data = Buffer.from(data, 'base64');
}
new Jimp(data, (err, imgObj) => {
if (err) {
return reject(err);
}
if (!imgObj) {
return reject(new Error('Could not create jimp image from that data'));
}
imgObj._getBuffer = imgObj.getBuffer.bind(imgObj);
imgObj.getBuffer = B.promisify(imgObj._getBuffer, {context: imgObj});
resolve(imgObj);
});
});
}
/**
* @throws {Error} If opencv4nodejs module is not installed or cannot be loaded
*/
async function initOpenCV () {
if (cv) {
return;
}
log.debug(`Initializing opencv`);
try {
cv = await requirePackage('opencv4nodejs');
} catch (err) {
log.warn(`Unable to load 'opencv4nodejs': ${err.message}`);
}
if (!cv) {
throw new Error(`'opencv4nodejs' module is required to use OpenCV features. ` +
`Please install it first ('npm i -g opencv4nodejs') and restart Appium. ` +
'Read https://github.com/justadudewhohacks/opencv4nodejs#how-to-install for more details on this topic.');
}
}
/**
* @typedef {Object} MatchComputationResult
* @property {cv.DescriptorMatch} descriptor - OpenCV match descriptor
* @property {Array<cv.KeyPoint>} keyPoints - The array of key points
*/
/**
* Calculates an OpenCV match descriptor of an image, which can be used
* for brute-force matching.
* Read https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html
* for more details.
*
* @param {cv.Mat} img Image data
* @param {cv.FeatureDetector} detector OpenCV feature detector instance
*
* @returns {MatchComputationResult}
*/
async function detectAndCompute (img, detector) {
const keyPoints = await detector.detectAsync(img);
const descriptor = await detector.computeAsync(img, keyPoints);
return {
keyPoints,
descriptor
};
}
/**
* Calculates the bounding rect coordinates for the array of matching points
*
* @param {Array<Point>} matchedPoints Array of matching points
* @returns {Rect} The matching bounding rect or a zero rect if no match
* can be found.
*/
function calculateMatchedRect (matchedPoints) {
if (matchedPoints.length < 2) {
return {
x: 0,
y: 0,
width: 0,
height: 0
};
}
const pointsSortedByDistance = matchedPoints
.map((point) => [Math.sqrt(point.x * point.x + point.y * point.y), point])
.sort((pair1, pair2) => pair1[0] - pair2[0])
.map((pair) => pair[1]);
const firstPoint = _.head(pointsSortedByDistance);
const lastPoint = _.last(pointsSortedByDistance);
const topLeftPoint = {
x: firstPoint.x <= lastPoint.x ? firstPoint.x : lastPoint.x,
y: firstPoint.y <= lastPoint.y ? firstPoint.y : lastPoint.y,
};
const bottomRightPoint = {
x: firstPoint.x >= lastPoint.x ? firstPoint.x : lastPoint.x,
y: firstPoint.y >= lastPoint.y ? firstPoint.y : lastPoint.y,
};
return {
x: topLeftPoint.x,
y: topLeftPoint.y,
width: bottomRightPoint.x - topLeftPoint.x,
height: bottomRightPoint.y - topLeftPoint.y
};
}
/**
* Draws a rectangle on the given image matrix
*
* @param {cv.Mat} mat The source image
* @param {Rect} region The region to highlight
*
* @returns {cv.Mat} The same image with the rectangle on it
*/
function highlightRegion (mat, region) {
if (region.width <= 0 || region.height <= 0) {
return;
}
// highlight in red
const color = new cv.Vec(0, 0, 255);
const thickness = 2;
mat.drawRectangle(new cv.Rect(region.x, region.y, region.width, region.height), color, thickness, cv.LINE_8);
return mat;
}
/**
* @typedef {Object} MatchingOptions
* @property {?string} detectorName ['ORB'] One of possible OpenCV feature detector names
* from `AVAILABLE_DETECTORS` array.
* Some of these methods (FAST, AGAST, GFTT, SIFT and MSER) are not available
* in the default OpenCV installation and have to be enabled manually before
* library compilation.
* @property {?string} matchFunc ['BruteForce'] The name of the matching function.
* Should be one of `AVAILABLE_MATCHING_FUNCTIONS` array.
* @property {?number|Function} goodMatchesFactor The maximum count of "good" matches
* (e. g. with minimal distances) or a function, which accepts 3 arguments: the current distance,
* minimal distance, maximum distance and returns true or false to include or exclude the match.
* @property {?boolean} visualize [false] Whether to return the resulting visualization
* as an image (useful for debugging purposes)
*/
/**
* @typedef {Object} MatchingResult
* @property {number} count The count of matched edges on both images.
* The more matching edges there are on both images, the more similar they are.
* @property {number} totalCount The total count of matched edges on both images.
* It is equal to `count` if `goodMatchesFactor` does not limit the matches,
* otherwise it contains the total count of matches before `goodMatchesFactor` is
* applied.
* @property {?Buffer} visualization The visualization of the matching result
* represented as PNG image buffer. This visualization looks like
* https://user-images.githubusercontent.com/31125521/29702731-c79e3142-8972-11e7-947e-db109d415469.jpg
* @property {Array<Point>} points1 The array of matching points on the first image
* @property {Rect} rect1 The bounding rect for the `matchedPoints1` set or a zero rect
* if not enough matching points are found
* @property {Array<Point>} points2 The array of matching points on the second image
* @property {Rect} rect2 The bounding rect for the `matchedPoints2` set or a zero rect
* if not enough matching points are found
*/
/**
* Calculates the count of common edges between two images.
* The images might be rotated or resized relatively to each other.
*
* @param {Buffer} img1Data The data of the first image packed into a NodeJS buffer
* @param {Buffer} img2Data The data of the second image packed into a NodeJS buffer
* @param {?MatchingOptions} options [{}] Set of matching options
*
* @returns {MatchingResult} Matching result
* @throws {Error} If `detectorName` value is unknown.
*/
async function getImagesMatches (img1Data, img2Data, options = {}) {
await initOpenCV();
const {detectorName = 'ORB', visualize = false,
goodMatchesFactor, matchFunc = 'BruteForce'} = options;
if (!_.includes(AVAILABLE_DETECTORS, detectorName)) {
throw new Error(`'${detectorName}' detector is unknown. ` +
`Only ${JSON.stringify(AVAILABLE_DETECTORS)} detectors are supported.`);
}
if (!_.includes(AVAILABLE_MATCHING_FUNCTIONS, matchFunc)) {
throw new Error(`'${matchFunc}' matching function is unknown. ` +
`Only ${JSON.stringify(AVAILABLE_MATCHING_FUNCTIONS)} matching functions are supported.`);
}
const detector = new cv[`${detectorName}Detector`]();
const [img1, img2] = await B.all([
cv.imdecodeAsync(img1Data),
cv.imdecodeAsync(img2Data)
]);
const [result1, result2] = await B.all([
detectAndCompute(img1, detector),
detectAndCompute(img2, detector)
]);
let matches = [];
try {
matches = await cv[`match${matchFunc}Async`](result1.descriptor, result2.descriptor);
} catch (e) {
throw new Error(`Cannot find any matches between the given images. Try another detection algorithm. ` +
` Original error: ${e}`);
}
const totalCount = matches.length;
if (hasValue(goodMatchesFactor)) {
if (_.isFunction(goodMatchesFactor)) {
const distances = matches.map((match) => match.distance);
const minDistance = _.min(distances);
const maxDistance = _.max(distances);
matches = matches
.filter((match) => goodMatchesFactor(match.distance, minDistance, maxDistance));
} else {
if (matches.length > goodMatchesFactor) {
matches = matches
.sort((match1, match2) => match1.distance - match2.distance)
.slice(0, goodMatchesFactor);
}
}
}
const extractPoint = (keyPoints, indexPropertyName) => (match) => {
const {pt, point} = keyPoints[match[indexPropertyName]];
// https://github.com/justadudewhohacks/opencv4nodejs/issues/584
return (pt || point);
};
const points1 = matches.map(extractPoint(result1.keyPoints, 'queryIdx'));
const rect1 = calculateMatchedRect(points1);
const points2 = matches.map(extractPoint(result2.keyPoints, 'trainIdx'));
const rect2 = calculateMatchedRect(points2);
const result = {
points1,
rect1,
points2,
rect2,
totalCount,
count: matches.length,
};
if (visualize) {
const visualization = cv.drawMatches(img1, img2, result1.keyPoints, result2.keyPoints, matches);
highlightRegion(visualization, rect1);
highlightRegion(visualization, {
x: img1.cols + rect2.x,
y: rect2.y,
width: rect2.width,
height: rect2.height
});
result.visualization = await cv.imencodeAsync('.png', visualization);
}
return result;
}
/**
* @typedef {Object} SimilarityOptions
* @property {?boolean} visualize [false] Whether to return the resulting visualization
* as an image (useful for debugging purposes)
* @property {string} method [TM_CCOEFF_NORMED] The name of the template matching method.
* Acceptable values are:
* - TM_CCOEFF
* - TM_CCOEFF_NORMED (default)
* - TM_CCORR
* - TM_CCORR_NORMED
* - TM_SQDIFF
* - TM_SQDIFF_NORMED
* Read https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html
* for more details.
*/
/**
* @typedef {Object} SimilarityResult
* @property {number} score The similarity score as a float number in range [0.0, 1.0].
* 1.0 is the highest score (means both images are totally equal).
* @property {?Buffer} visualization The visualization of the matching result
* represented as PNG image buffer. This image includes both input pictures where
* difference regions are highlighted with rectangles.
*/
/**
* Calculates the similarity score between two images.
* It is expected, that both images have the same resolution.
*
* @param {Buffer} img1Data The data of the first image packed into a NodeJS buffer
* @param {Buffer} img2Data The data of the second image packed into a NodeJS buffer
* @param {?SimilarityOptions} options [{}] Set of similarity calculation options
*
* @returns {SimilarityResult} The calculation result
* @throws {Error} If the given images have different resolution.
*/
async function getImagesSimilarity (img1Data, img2Data, options = {}) {
await initOpenCV();
const {
method = DEFAULT_MATCHING_METHOD,
visualize = false,
} = options;
let [template, reference] = await B.all([
cv.imdecodeAsync(img1Data),
cv.imdecodeAsync(img2Data)
]);
if (template.rows !== reference.rows || template.cols !== reference.cols) {
throw new Error('Both images are expected to have the same size in order to ' +
'calculate the similarity score.');
}
[template, reference] = await B.all([
template.convertToAsync(cv.CV_8UC3),
reference.convertToAsync(cv.CV_8UC3)
]);
let matched;
try {
matched = await reference.matchTemplateAsync(template, toMatchingMethod(method));
} catch (e) {
throw new Error(`The reference image did not match to the template one. Original error: ${e.message}`);
}
const minMax = await matched.minMaxLocAsync();
const result = {
score: minMax.maxVal
};
if (visualize) {
const resultMat = new cv.Mat(template.rows, template.cols * 2, cv.CV_8UC3);
await B.all([
reference.copyToAsync(
resultMat.getRegion(new cv.Rect(0, 0, reference.cols, reference.rows))),
template.copyToAsync(
resultMat.getRegion(new cv.Rect(reference.cols, 0, template.cols, template.rows)))
]);
let mask = reference.absdiff(template);
mask = await mask.cvtColorAsync(cv.COLOR_BGR2GRAY);
let contours = [];
try {
mask = await mask.thresholdAsync(128, 255, cv.THRESH_BINARY | cv.THRESH_OTSU);
contours = await mask.findContoursAsync(cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE);
} catch (ign) {
// No contours can be found, which means, most likely, that images are equal
}
for (const contour of contours) {
const boundingRect = contour.boundingRect();
highlightRegion(resultMat, boundingRect);
highlightRegion(resultMat, {
x: reference.cols + boundingRect.x,
y: boundingRect.y,
width: boundingRect.width,
height: boundingRect.height
});
}
result.visualization = await cv.imencodeAsync('.png', resultMat);
}
return result;
}
/**
* @typedef {Object} OccurrenceOptions
* @property {?boolean} visualize [false] Whether to return the resulting visualization
* as an image (useful for debugging purposes)
* @property {?float} threshold [0.5] At what normalized threshold to reject
* a match
* @property {?float} multiple [false] find multiple matches in the image
* @property {?number} matchNeighbourThreshold [10] The pixel distance between matches we consider
* to be part of the same template match
*/
/**
* @typedef {Object} OccurrenceResult
* @property {Rect} rect The region of the partial image occurrence
* on the full image
* @property {?Buffer} visualization The visualization of the matching result
* represented as PNG image buffer. On this image the matching
* region is highlighted with a rectangle. If the multiple option is passed,
* all results are highlighted here.
* @property {number} score The similarity score as a float number in range [0.0, 1.0].
* 1.0 is the highest score (means both images are totally equal).
* @property {Array<OccurrenceResult>} multiple The array of matching OccurrenceResult objects
* - only when multiple option is passed
* @property {string} method [TM_CCOEFF_NORMED] The name of the template matching method.
* Acceptable values are:
* - TM_CCOEFF
* - TM_CCOEFF_NORMED (default)
* - TM_CCORR
* - TM_CCORR_NORMED
* - TM_SQDIFF
* - TM_SQDIFF_NORMED
* Read https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html
* for more details.
*/
/**
* Calculates the occurrence position of a partial image in the full
* image.
*
* @param {Buffer} fullImgData The data of the full image packed into a NodeJS buffer
* @param {Buffer} partialImgData The data of the partial image packed into a NodeJS buffer
* @param {?OccurrenceOptions} options [{}] Set of occurrence calculation options
*
* @returns {OccurrenceResult}
* @throws {Error} If no occurrences of the partial image can be found in the full image
*/
async function getImageOccurrence (fullImgData, partialImgData, options = {}) {
await initOpenCV();
const {
visualize = false,
threshold = DEFAULT_MATCH_THRESHOLD,
multiple = false,
matchNeighbourThreshold = MATCH_NEIGHBOUR_THRESHOLD,
method = DEFAULT_MATCHING_METHOD,
} = options;
const [fullImg, partialImg] = await B.all([
cv.imdecodeAsync(fullImgData),
cv.imdecodeAsync(partialImgData)
]);
const results = [];
let visualization = null;
try {
const matched = await fullImg.matchTemplateAsync(partialImg, toMatchingMethod(method));
const minMax = await matched.minMaxLocAsync();
if (multiple) {
const nonZeroMatchResults = matched.threshold(threshold, 1, cv.THRESH_BINARY)
.convertTo(cv.CV_8U)
.findNonZero();
const matches = filterNearMatches(nonZeroMatchResults, matchNeighbourThreshold);
for (const {x, y} of matches) {
results.push({
score: matched.at(y, x),
rect: {
x, y,
width: partialImg.cols,
height: partialImg.rows
}
});
}
} else if (minMax.maxVal >= threshold) {
const {x, y} = method.includes('SQDIFF') ? minMax.minLoc : minMax.maxLoc;
results.push({
score: minMax.maxVal,
rect: {
x, y,
width: partialImg.cols,
height: partialImg.rows
}
});
}
if (_.isEmpty(results)) {
// Below error message, `Cannot find any occurrences` is referenced in find by image
throw new Error(`Match threshold: ${threshold}. Highest match value ` +
`found was ${minMax.maxVal}`);
}
} catch (e) {
// Below error message, `Cannot find any occurrences` is referenced in find by image
throw new Error(`Cannot find any occurrences of the partial image in the full image. ` +
`Original error: ${e.message}`);
}
if (visualize) {
const fullHighlightedImage = fullImg.copy();
for (const result of results) {
const singleHighlightedImage = fullImg.copy();
highlightRegion(singleHighlightedImage, result.rect);
highlightRegion(fullHighlightedImage, result.rect);
result.visualization = await cv.imencodeAsync('.png', singleHighlightedImage);
}
visualization = await cv.imencodeAsync('.png', fullHighlightedImage);
}
return {
rect: results[0].rect,
score: results[0].score,
visualization,
multiple: results
};
}
/**
* Filter out match results which have a matched neighbour
*
* @param {Array<Point>} nonZeroMatchResults matrix of image match results
* @param {number} matchNeighbourThreshold The pixel distance within which we
* consider an element being a neighbour of an existing match
* @return {Array<Point>} the filtered array of matched points
*/
function filterNearMatches (nonZeroMatchResults, matchNeighbourThreshold) {
return nonZeroMatchResults.reduce((acc, element) => {
if (!acc.some((match) => distance(match, element) <= matchNeighbourThreshold)) {
acc.push(element);
}
return acc;
}, []);
}
/**
* Find the distance between two points
*
* @param {Point} point1 The first point
* @param {Point} point2 The second point
* @return {number} the distance
*/
function distance (point1, point2) {
const a2 = Math.pow((point1.x - point2.x), 2);
const b2 = Math.pow((point1.y - point2.y), 2);
return Math.sqrt(a2 + b2);
}
/**
* Crop the image by given rectangle (use base64 string as input and output)
*
* @param {string} base64Image The string with base64 encoded image
* @param {Region} rect The selected region of image
* @return {string} base64 encoded string of cropped image
*/
async function cropBase64Image (base64Image, rect) {
const image = await base64ToImage(base64Image);
cropImage(image, rect);
return await imageToBase64(image);
}
/**
* Create a pngjs image from given base64 image
*
* @param {string} base64Image The string with base64 encoded image
* @return {PNG} The image object
*/
async function base64ToImage (base64Image) {
const imageBuffer = Buffer.from(base64Image, 'base64');
return await new B((resolve, reject) => {
const image = new PNG({filterType: SCANLINE_FILTER_METHOD});
image.parse(imageBuffer, (err, image) => { // eslint-disable-line promise/prefer-await-to-callbacks
if (err) {
return reject(err);
}
resolve(image);
});
});
}
/**
* Create a base64 string for given image object
*
* @param {PNG} image The image object
* @return {string} The string with base64 encoded image
*/
async function imageToBase64 (image) {
return await new B((resolve, reject) => {
const chunks = [];
image.pack()
.on('data', (chunk) => chunks.push(chunk)).on('end', () => {
resolve(Buffer.concat(chunks).toString('base64'));
})
.on('error', (err) => { // eslint-disable-line promise/prefer-await-to-callbacks
reject(err);
});
});
}
/**
* Crop the image by given rectangle
*
* @param {PNG} image The image to mutate by cropping
* @param {Region} rect The selected region of image
*/
function cropImage (image, rect) {
const imageRect = {width: image.width, height: image.height};
const interRect = getRectIntersection(rect, imageRect);
if (interRect.width < rect.width || interRect.height < rect.height) {
throw new Error(`Cannot crop ${JSON.stringify(rect)} from ${JSON.stringify(imageRect)} because the intersection between them was not the size of the rect`);
}
const firstVerticalPixel = interRect.top;
const lastVerticalPixel = interRect.top + interRect.height;
const firstHorizontalPixel = interRect.left;
const lastHorizontalPixel = interRect.left + interRect.width;
const croppedArray = [];
for (let y = firstVerticalPixel; y < lastVerticalPixel; y++) {
for (let x = firstHorizontalPixel; x < lastHorizontalPixel; x++) {
const firstByteIdxInPixelBlock = (imageRect.width * y + x) << 2;
for (let byteIdx = 0; byteIdx < BYTES_IN_PIXEL_BLOCK; byteIdx++) {
croppedArray.push(image.data[firstByteIdxInPixelBlock + byteIdx]);
}
}
}
image.data = Buffer.from(croppedArray);
image.width = interRect.width;
image.height = interRect.height;
return image;
}
function getRectIntersection (rect, imageSize) {
const left = rect.left >= imageSize.width ? imageSize.width : rect.left;
const top = rect.top >= imageSize.height ? imageSize.height : rect.top;
const width = imageSize.width >= (left + rect.width) ? rect.width : (imageSize.width - left);
const height = imageSize.height >= (top + rect.height) ? rect.height : (imageSize.height - top);
return {left, top, width, height};
}
export {
cropBase64Image, base64ToImage, imageToBase64, cropImage, getImagesMatches,
getImagesSimilarity, getImageOccurrence, getJimpImage, MIME_JPEG, MIME_PNG,
MIME_BMP,
};
+167
@@ -0,0 +1,167 @@
import fs from './fs';
import _ from 'lodash';
const DEFAULT_REPLACER = '**SECURE**';
/**
* @typedef {Object} SecureValuePreprocessingRule
* @property {RegExp} pattern The parsed pattern which is going to be used for replacement
* @property {string} replacer [DEFAULT_REPLACER] The replacer value to use. Defaults to
* `DEFAULT_REPLACER`
*/
class SecureValuesPreprocessor {
constructor () {
this._rules = [];
}
/**
* @returns {Array<SecureValuePreprocessingRule>} The list of successfully
* parsed preprocessing rules
*/
get rules () {
return this._rules;
}
/**
* @typedef {Object} Rule
* @property {string} pattern A valid RegExp pattern to be replaced
* @property {string} text A text match to replace. Either this property or the
* above one must be provided. `pattern` has priority over `text` if both are provided.
* @property {string} flags ['g'] Regular expression flags for the given pattern.
* Supported flag are the same as for the standard JavaScript RegExp constructor:
* https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#Advanced_searching_with_flags_2
* The 'g' (global matching) is always enabled though.
* @property {string} replacer [DEFAULT_REPLACER] The replacer value to use. Defaults to
* `DEFAULT_REPLACER`
*/
/**
* Parses a single replacement rule
*
* @param {string|Rule} rule The rule might either be represented as a single string
* or a configuration object
* @throws {Error} If there was an error while parsing the rule
* @returns {SecureValuePreprocessingRule} The parsed rule
*/
parseRule (rule) {
const raiseError = (msg) => {
throw new Error(`${JSON.stringify(rule)} -> ${msg}`);
};
let pattern;
let replacer = DEFAULT_REPLACER;
let flags = ['g'];
if (_.isString(rule)) {
if (rule.length === 0) {
raiseError('The value must not be empty');
}
pattern = `\\b${_.escapeRegExp(rule)}\\b`;
} else if (_.isPlainObject(rule)) {
if (_.has(rule, 'pattern')) {
if (!_.isString(rule.pattern) || rule.pattern.length === 0) {
raiseError(`The value of 'pattern' must be a valid non-empty string`);
}
pattern = rule.pattern;
} else if (_.has(rule, 'text')) {
if (!_.isString(rule.text) || rule.text.length === 0) {
raiseError(`The value of 'text' must be a valid non-empty string`);
}
pattern = `\\b${_.escapeRegExp(rule.text)}\\b`;
}
if (!pattern) {
raiseError(`Must either have a field named 'pattern' or 'text'`);
}
if (_.has(rule, 'flags')) {
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#Advanced_searching_with_flags_2
for (const flag of ['i', 'g', 'm', 's', 'u', 'y']) {
if (_.includes(rule.flags, flag)) {
flags.push(flag);
}
}
flags = _.uniq(flags);
}
if (_.isString(rule.replacer)) {
replacer = rule.replacer;
}
} else {
raiseError('Must either be a string or an object');
}
try {
return {
pattern: new RegExp(pattern, flags.join('')),
replacer,
};
} catch (e) {
raiseError(e.message);
}
}
/**
* Loads rules from the given JSON file
*
* @param {string|Array<string|Rule>} source The full path to the JSON file containing secure
* values replacement rules or the rules themselves represented as an array
* @throws {Error} If the format of the source file is invalid or
* it does not exist
* @returns {Array<string>} The list of issues found while parsing each rule.
* An empty list is returned if no rule parsing issues were found
*/
async loadRules (source) {
let rules;
if (_.isArray(source)) {
rules = source;
} else {
if (!await fs.exists(source)) {
throw new Error(`'${source}' does not exist or is not accessible`);
}
try {
rules = JSON.parse(await fs.readFile(source, 'utf8'));
} catch (e) {
throw new Error(`'${source}' must be a valid JSON file. Original error: ${e.message}`);
}
if (!_.isArray(rules)) {
throw new Error(`'${source}' must contain a valid JSON array`);
}
}
const issues = [];
this._rules = [];
for (const rule of rules) {
try {
this._rules.push(this.parseRule(rule));
} catch (e) {
issues.push(e.message);
}
}
return issues;
}
/**
* Performs secure values replacement inside the given string
* according to the previously loaded rules. No replacement is made
* if there are no rules or the given value is not a string
*
* @param {string} str The string to make replacements in
* @returns {string} The string with replacements made
*/
preprocess (str) {
if (this._rules.length === 0 || !_.isString(str)) {
return str;
}
let result = str;
for (const rule of this._rules) {
result = result.replace(rule.pattern, rule.replacer);
}
return result;
}
}
const SECURE_VALUES_PREPROCESSOR = new SecureValuesPreprocessor();
export { SECURE_VALUES_PREPROCESSOR, SecureValuesPreprocessor };
export default SECURE_VALUES_PREPROCESSOR;
+5
@@ -0,0 +1,5 @@
import { getLogger } from './logging';
let log = getLogger('Support');
export default log;
+139
@@ -0,0 +1,139 @@
import npmlog from 'npmlog';
import _ from 'lodash';
import { unleakString } from './util';
import moment from 'moment';
import SECURE_VALUES_PREPROCESSOR from './log-internal';
// levels that are available from `npmlog`
const NPM_LEVELS = ['silly', 'verbose', 'debug', 'info', 'http', 'warn', 'error'];
const MAX_LOG_RECORDS_COUNT = 3000;
const PREFIX_TIMESTAMP_FORMAT = 'HH-mm-ss:SSS';
// mock log object used in testing mode
let mockLog = {};
for (let level of NPM_LEVELS) {
mockLog[level] = () => {};
}
function patchLogger (logger) {
if (!logger.debug) {
logger.addLevel('debug', 1000, { fg: 'blue', bg: 'black' }, 'dbug');
}
}
function _getLogger () {
// check if the user set the `_TESTING` or `_FORCE_LOGS` flag
const testingMode = parseInt(process.env._TESTING, 10) === 1;
const forceLogMode = parseInt(process.env._FORCE_LOGS, 10) === 1;
// it is possible that there is a logger instance that is already around,
// in which case we want to use that
const usingGlobalLog = !!global._global_npmlog;
let logger;
if (testingMode && !forceLogMode) {
// in testing mode, use a mock logger object that we can query
logger = mockLog;
} else {
// otherwise, either use the global, or a new `npmlog` object
logger = global._global_npmlog || npmlog;
// The default value is 10000, which causes excessive memory usage
logger.maxRecordSize = MAX_LOG_RECORDS_COUNT;
}
patchLogger(logger);
return [logger, usingGlobalLog];
}
function getActualPrefix (prefix, logTimestamp = false) {
let actualPrefix = _.isFunction(prefix) ? prefix() : prefix;
if (logTimestamp) {
actualPrefix = `[${moment().format(PREFIX_TIMESTAMP_FORMAT)}] ${actualPrefix}`;
}
return actualPrefix;
}
function getLogger (prefix = null) {
let [logger, usingGlobalLog] = _getLogger();
// wrap the logger so that we can catch and modify any logging
let wrappedLogger = {unwrap: () => logger};
// allow access to the level of the underlying logger
Object.defineProperty(wrappedLogger, 'level', {
get () {
return logger.level;
},
set (newValue) {
logger.level = newValue;
},
enumerable: true,
configurable: true
});
const logTimestamp = parseInt(process.env._LOG_TIMESTAMP, 10) === 1;
// add all the levels from `npmlog`, and map to the underlying logger
for (const level of NPM_LEVELS) {
wrappedLogger[level] = function (...args) {
const actualPrefix = getActualPrefix(prefix, logTimestamp);
for (const arg of args) {
const out = (_.isError(arg) && arg.stack) ? arg.stack : `${arg}`;
for (const line of out.split('\n')) {
// it is necessary to unleak each line because `split` call
// creates "views" to the original string as well as the `substring` one
const unleakedLine = unleakString(line);
logger[level](actualPrefix, SECURE_VALUES_PREPROCESSOR.preprocess(unleakedLine));
}
}
};
}
// add method to log an error, and throw it, for convenience
wrappedLogger.errorAndThrow = function (err) {
this.error(err);
// make sure we have an `Error` object. Wrap if necessary
throw (_.isError(err) ? err : new Error(unleakString(err)));
};
if (!usingGlobalLog) {
// if we're not using a global log specified from some top-level package,
// set the log level to a default of verbose. Otherwise, let the top-level
// package set the log level
wrappedLogger.level = 'verbose';
}
wrappedLogger.levels = NPM_LEVELS;
return wrappedLogger;
}
/**
* @typedef {Object} LoadResult
* @property {List<string>} issues The list of rule parsing issues (one item per rule).
* Rules with issues are skipped. An empty list is returned if no parsing issues exist.
* @property {List<SecureValuePreprocessingRule>} rules The list of successfully loaded
* replacement rules. The list could be empty if no rules were loaded.
*/
/**
* Loads the JSON file containing secure values replacement rules.
* This might be necessary to hide sensitive values that may possibly
* appear in Appium logs.
* Each call to this method replaces the previously loaded rules if any existed.
*
* @param {string} rulesJsonPath The full path to the JSON file containing
* the replacement rules. Each rule could either be a string to be replaced
* or an object with predefined properties. See the `Rule` type definition in
* `log-internals.js` to get more details on its format.
* @throws {Error} If the given file cannot be loaded
* @returns {LoadResult}
*/
async function loadSecureValuesPreprocessingRules (rulesJsonPath) {
const issues = await SECURE_VALUES_PREPROCESSOR.loadRules(rulesJsonPath);
return {
issues,
rules: _.cloneDeep(SECURE_VALUES_PREPROCESSOR.rules),
};
}
// export a default logger with no prefix
const log = getLogger();
export { log, patchLogger, getLogger, loadSecureValuesPreprocessingRules };
export default log;
+191
@@ -0,0 +1,191 @@
import _ from 'lodash';
import log from './logger';
import B from 'bluebird';
import { getJimpImage, MIME_PNG } from './image-util';
import { Writable } from 'stream';
import { requirePackage } from './node';
import axios from 'axios';
// lazy load this, as it might not be available
let MJpegConsumer = null;
/**
* @throws {Error} If `mjpeg-consumer` module is not installed or cannot be loaded
*/
async function initMJpegConsumer () {
if (!MJpegConsumer) {
try {
MJpegConsumer = await requirePackage('mjpeg-consumer');
} catch (ign) {}
}
if (!MJpegConsumer) {
throw new Error('mjpeg-consumer module is required to use MJPEG-over-HTTP features. ' +
'Please install it first (npm i -g mjpeg-consumer) and restart Appium.');
}
}
// amount of time to wait for the first image in the stream
const MJPEG_SERVER_TIMEOUT_MS = 10000;
/** Class which stores the last bit of data streamed into it */
class MJpegStream extends Writable {
/**
* Create an MJpegStream
* @param {string} mJpegUrl - URL of MJPEG-over-HTTP stream
* @param {function} [errorHandler=noop] - additional function that will be
* called in the case of any errors.
* @param {object} [options={}] - Options to pass to the Writable constructor
*/
constructor (mJpegUrl, errorHandler = _.noop, options = {}) {
super(options);
this.errorHandler = errorHandler;
this.url = mJpegUrl;
this.clear();
}
/**
* Get the base64-encoded version of the JPEG
*
* @returns {?string} base64-encoded JPEG image data
* or `null` if no image can be parsed
*/
get lastChunkBase64 () {
return !_.isEmpty(this.lastChunk) && _.isBuffer(this.lastChunk)
? this.lastChunk.toString('base64')
: null;
}
/**
* Get the PNG version of the JPEG buffer
*
* @returns {?Buffer} PNG image data or `null` if no PNG
* image can be parsed
*/
async lastChunkPNG () {
if (_.isEmpty(this.lastChunk) || !_.isBuffer(this.lastChunk)) {
return null;
}
try {
const jpg = await getJimpImage(this.lastChunk);
return await jpg.getBuffer(MIME_PNG);
} catch (e) {
return null;
}
}
/**
* Get the base64-encoded version of the PNG
*
* @returns {?string} base64-encoded PNG image data
* or `null` if no image can be parsed
*/
async lastChunkPNGBase64 () {
const png = await this.lastChunkPNG();
return png ? png.toString('base64') : null;
}
/**
* Reset internal state
*/
clear () {
this.registerStartSuccess = null;
this.registerStartFailure = null;
this.responseStream = null;
this.consumer = null;
this.lastChunk = null;
this.updateCount = 0;
}
/**
* Start reading the MJpeg stream and storing the last image
*/
async start (serverTimeout = MJPEG_SERVER_TIMEOUT_MS) {
// ensure we're not started already
this.stop();
await initMJpegConsumer();
this.consumer = new MJpegConsumer();
// use the deferred pattern so we can wait for the start of the stream
// based on what comes in from an external pipe
const startPromise = new B((res, rej) => {
this.registerStartSuccess = res;
this.registerStartFailure = rej;
})
// start a timeout so that if the server does not return data, we don't
// block forever.
.timeout(serverTimeout,
`Waited ${serverTimeout}ms but the MJPEG server never sent any images`);
const url = this.url;
const onErr = (err) => {
// Make sure we don't get an outdated screenshot if there was an error
this.lastChunk = null;
log.error(`Error getting MJpeg screenshot chunk: ${err.message}`);
this.errorHandler(err);
if (this.registerStartFailure) {
this.registerStartFailure(err);
}
};
const onClose = () => {
log.debug(`The connection to MJPEG server at ${url} has been closed`);
this.lastChunk = null;
};
try {
this.responseStream = (await axios({
url,
responseType: 'stream',
timeout: serverTimeout,
})).data;
} catch (e) {
return onErr(e);
}
this.responseStream
.once('close', onClose)
.on('error', onErr) // ensure we do something with errors
.pipe(this.consumer) // allow chunking and transforming of jpeg data
.pipe(this); // send the actual jpegs to ourself
await startPromise;
}
/**
* Stop reading the MJpeg stream. Ensure we disconnect all the pipes and stop
* the HTTP request itself. Then reset the state.
*/
stop () {
if (!this.consumer) {
return;
}
this.responseStream.unpipe(this.consumer);
this.consumer.unpipe(this);
this.responseStream.destroy();
this.clear();
}
/**
* Override the Writable write() method in order to save the last image and
* log the number of images we have received
* @override
* @param {Buffer} data - binary data streamed from the MJpeg consumer
*/
write (data) {
this.lastChunk = data;
this.updateCount++;
if (this.registerStartSuccess) {
this.registerStartSuccess();
this.registerStartSuccess = null;
}
}
}
export { MJpegStream };
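A minimal usage sketch for the stream class above; the MJPEG URL is a placeholder and the relative import path is an assumption:
```js
import { MJpegStream } from './mjpeg';

async function grabOneFrame () {
  const stream = new MJpegStream('http://localhost:9100', (err) => {
    console.error(`MJPEG stream error: ${err.message}`);
  });
  // resolves once the first image arrives, or rejects after the server timeout
  await stream.start();
  try {
    // base64-encoded JPEG of the most recent frame, or null
    return stream.lastChunkBase64;
  } finally {
    stream.stop();
  }
}
```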
+9
View File
@@ -0,0 +1,9 @@
import mkdirp from 'mkdirp';
/**
* TODO: once we drop support for Node 10, this should be removed in favor
* of fs.mkdir(dir, {recursive: true});
*/
export { mkdirp };
+262
View File
@@ -0,0 +1,262 @@
import _ from 'lodash';
import fs from './fs';
import url from 'url';
import B from 'bluebird';
import { toReadableSizeString } from './util';
import log from './logger';
import Ftp from 'jsftp';
import Timer from './timing';
import axios from 'axios';
import FormData from 'form-data';
function toAxiosAuth (auth) {
if (!_.isPlainObject(auth)) {
return null;
}
const axiosAuth = {
username: auth.username || auth.user,
password: auth.password || auth.pass,
};
return (axiosAuth.username && axiosAuth.password) ? axiosAuth : null;
}
async function uploadFileToHttp (localFileStream, parsedUri, uploadOptions = {}) {
const {
method = 'POST',
timeout = 5000,
headers,
auth,
fileFieldName = 'file',
formFields,
} = uploadOptions;
const { href } = parsedUri;
const requestOpts = {
url: href,
method,
timeout,
maxContentLength: Infinity,
maxBodyLength: Infinity,
};
const axiosAuth = toAxiosAuth(auth);
if (axiosAuth) {
requestOpts.auth = axiosAuth;
}
if (fileFieldName) {
const form = new FormData();
form.append(fileFieldName, localFileStream);
if (formFields) {
let pairs = [];
if (_.isArray(formFields)) {
pairs = formFields;
} else if (_.isPlainObject(formFields)) {
pairs = _.toPairs(formFields);
}
for (const [key, value] of pairs) {
if (_.toLower(key) !== _.toLower(fileFieldName)) {
form.append(key, value);
}
}
}
requestOpts.headers = Object.assign({}, _.isPlainObject(headers) ? headers : {},
form.getHeaders());
requestOpts.data = form;
} else {
if (_.isPlainObject(headers)) {
requestOpts.headers = headers;
}
requestOpts.data = localFileStream;
}
log.debug(`Performing ${method} to ${href} with options (excluding data): ` +
JSON.stringify(_.omit(requestOpts, ['data'])));
const {status, statusText} = await axios(requestOpts);
log.info(`Server response: ${status} ${statusText}`);
}
async function uploadFileToFtp (localFileStream, parsedUri, uploadOptions = {}) {
const {
auth,
user,
pass,
} = uploadOptions;
const {
hostname,
port,
protocol,
pathname,
} = parsedUri;
const ftpOpts = {
host: hostname,
port: port || 21,
};
if ((auth?.user && auth?.pass) || (user && pass)) {
ftpOpts.user = auth?.user || user;
ftpOpts.pass = auth?.pass || pass;
}
log.debug(`${protocol} upload options: ${JSON.stringify(ftpOpts)}`);
return await new B((resolve, reject) => {
new Ftp(ftpOpts).put(localFileStream, pathname, (err) => {
if (err) {
reject(err);
} else {
resolve();
}
});
});
}
/**
* @typedef {Object} AuthCredentials
* @property {string} user - Non-empty user name
* @property {string} pass - Non-empty password
*/
/**
* @typedef {Object} FtpUploadOptions
* @property {boolean} isMetered [true] - Whether to log the actual upload performance
* (e.g. timings and speed)
* @property {AuthCredentials} auth
*/
/**
* @typedef {Object} HttpUploadOptions
* @property {boolean} isMetered [true] - Whether to log the actual upload performance
* (e.g. timings and speed)
* @property {string} method [POST] - The HTTP method used for file upload
* @property {AuthCredentials} auth
* @property {number} timeout [5000] - The actual request timeout in milliseconds
* @property {Object} headers - Additional request headers mapping
* @property {?string} fileFieldName [file] - The name of the form field containing the file
* content to be uploaded. Any falsy value make the request to use non-multipart upload
* @property {Array<Pair>|Object} formFields - The additional form fields
* to be included into the upload request. This property is only considered if
* `fileFieldName` is set
*/
/**
* Uploads the given file to a remote location. HTTP(S) and FTP
* protocols are supported.
*
* @param {string} localPath - The path to a file on the local storage.
* @param {string} remoteUri - The remote URI to upload the file to.
* @param {?FtpUploadOptions|HttpUploadOptions} uploadOptions
*/
async function uploadFile (localPath, remoteUri, uploadOptions = {}) {
if (!await fs.exists(localPath)) {
throw new Error (`'${localPath}' does not exists or is not accessible`);
}
const {
isMetered = true,
} = uploadOptions;
const parsedUri = url.parse(remoteUri);
const {size} = await fs.stat(localPath);
if (isMetered) {
log.info(`Uploading '${localPath}' of ${toReadableSizeString(size)} size to '${remoteUri}'`);
}
const timer = new Timer().start();
if (['http:', 'https:'].includes(parsedUri.protocol)) {
if (!uploadOptions.fileFieldName) {
uploadOptions.headers = Object.assign({},
_.isPlainObject(uploadOptions.headers) ? uploadOptions.headers : {},
{'Content-Length': size}
);
}
await uploadFileToHttp(fs.createReadStream(localPath), parsedUri, uploadOptions);
} else if (parsedUri.protocol === 'ftp:') {
await uploadFileToFtp(fs.createReadStream(localPath), parsedUri, uploadOptions);
} else {
throw new Error(`Cannot upload the file at '${localPath}' to '${remoteUri}'. ` +
`Unsupported remote protocol '${parsedUri.protocol}'. ` +
`Only http/https and ftp/ftps protocols are supported.`);
}
if (isMetered) {
log.info(`Uploaded '${localPath}' of ${toReadableSizeString(size)} size in ` +
`${timer.getDuration().asSeconds.toFixed(3)}s`);
}
}
/**
* @typedef {Object} DownloadOptions
* @property {boolean} isMetered [true] - Whether to log the actual download performance
* (e.g. timings and speed)
* @property {AuthCredentials} auth
* @property {number} timeout [5000] - The actual request timeout in milliseconds
* @property {Object} headers - Request headers mapping
*/
/**
* Downloads the given file via HTTP(S)
*
* @param {string} remoteUrl - The remote url
* @param {string} dstPath - The local path to download the file to
* @param {?DownloadOptions} downloadOptions
* @throws {Error} If download operation fails
*/
async function downloadFile (remoteUrl, dstPath, downloadOptions = {}) {
const {
isMetered = true,
auth,
timeout = 5000,
headers,
} = downloadOptions;
const requestOpts = {
url: remoteUrl,
responseType: 'stream',
timeout,
};
const axiosAuth = toAxiosAuth(auth);
if (axiosAuth) {
requestOpts.auth = axiosAuth;
}
if (_.isPlainObject(headers)) {
requestOpts.headers = headers;
}
const timer = new Timer().start();
let responseLength;
try {
const writer = fs.createWriteStream(dstPath);
const {
data: responseStream,
headers: responseHeaders,
} = await axios(requestOpts);
responseLength = parseInt(responseHeaders['content-length'], 10);
responseStream.pipe(writer);
await new B((resolve, reject) => {
responseStream.once('error', reject);
writer.once('finish', resolve);
writer.once('error', (e) => {
responseStream.unpipe(writer);
reject(e);
});
});
} catch (err) {
throw new Error(`Cannot download the file from ${remoteUrl}: ${err.message}`);
}
const {size} = await fs.stat(dstPath);
if (responseLength && size !== responseLength) {
await fs.rimraf(dstPath);
throw new Error(`The size of the file downloaded from ${remoteUrl} (${size} bytes) ` +
`differs from the one in Content-Length response header (${responseLength} bytes)`);
}
if (isMetered) {
const secondsElapsed = timer.getDuration().asSeconds;
log.debug(`${remoteUrl} (${toReadableSizeString(size)}) ` +
`has been downloaded to '${dstPath}' in ${secondsElapsed.toFixed(3)}s`);
if (secondsElapsed >= 2) {
const bytesPerSec = Math.floor(size / secondsElapsed);
log.debug(`Approximate download speed: ${toReadableSizeString(bytesPerSec)}/s`);
}
}
}
export { uploadFile, downloadFile };
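A hedged sketch of how the two helpers above could be combined; the endpoint URLs and local paths are placeholders:
```js
import { uploadFile, downloadFile } from './net';

async function transferExample () {
  // multipart upload: the file content goes into the 'file' form field by default
  await uploadFile('/tmp/report.zip', 'https://example.com/upload', {
    fileFieldName: 'file',
    formFields: {comment: 'nightly build'},
    timeout: 10000,
  });
  // plain HTTP(S) download; the size is verified against the Content-Length header
  await downloadFile('https://example.com/artifact.zip', '/tmp/artifact.zip', {
    timeout: 10000,
  });
}
```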
+66
View File
@@ -0,0 +1,66 @@
import { isWindows } from './system';
import log from './logger';
import { exec } from 'teen_process';
import path from 'path';
/**
* Internal utility to link global package to local context
*
* @param {string} packageName - name of the package to link
* @throws {Error} If the command fails
*/
async function linkGlobalPackage (packageName) {
try {
log.debug(`Linking package '${packageName}'`);
const cmd = isWindows() ? 'npm.cmd' : 'npm';
await exec(cmd, ['link', packageName], {timeout: 20000});
} catch (err) {
const msg = `Unable to load package '${packageName}', linking failed: ${err.message}`;
log.debug(msg);
if (err.stderr) {
// log the stderr if there, but do not add to thrown error as it is
// _very_ verbose
log.debug(err.stderr);
}
throw new Error(msg);
}
}
/**
* Utility function to extend node functionality, allowing us to require
* modules that are installed globally. If the package cannot be required,
* this will attempt to link the package and then re-require it
*
* @param {string} packageName - the name of the package to be required
* @returns {object} - the package object
* @throws {Error} If the package is not found locally or globally
*/
async function requirePackage (packageName) {
// first, get it in the normal way (see https://nodejs.org/api/modules.html#modules_all_together)
try {
log.debug(`Loading local package '${packageName}'`);
return require(packageName);
} catch (err) {
log.debug(`Failed to load local package '${packageName}': ${err.message}`);
}
// second, get it from where it ought to be in the global node_modules
try {
const globalPackageName = path.resolve(process.env.npm_config_prefix, 'lib', 'node_modules', packageName);
log.debug(`Loading global package '${globalPackageName}'`);
return require(globalPackageName);
} catch (err) {
log.debug(`Failed to load global package '${packageName}': ${err.message}`);
}
// third, link the file and get locally
try {
await linkGlobalPackage(packageName);
log.debug(`Retrying load of linked package '${packageName}'`);
return require(packageName);
} catch (err) {
log.errorAndThrow(`Unable to load package '${packageName}': ${err.message}`);
}
}
export { requirePackage };
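For example, an optional dependency could be lazy-loaded with this helper, following the same pattern as `initMJpegConsumer` above; the package name is just an example:
```js
import { requirePackage } from './node';

let imageLib = null;

async function getImageLib () {
  if (!imageLib) {
    // falls back to the global node_modules and `npm link` if a local require fails
    imageLib = await requirePackage('jimp');
  }
  return imageLib;
}
```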
+164
View File
@@ -0,0 +1,164 @@
import xmlplist from 'plist';
import bplistCreate from 'bplist-creator';
import bplistParse from 'bplist-parser';
import fs from './fs';
import log from './logger';
import _ from 'lodash';
const BPLIST_IDENTIFIER = {
BUFFER: Buffer.from('bplist00'),
TEXT: 'bplist00'
};
const PLIST_IDENTIFIER = {
BUFFER: Buffer.from('<'),
TEXT: '<'
};
// XML Plist library helper
async function parseXmlPlistFile (plistFilename) {
let xmlContent = await fs.readFile(plistFilename, 'utf8');
return xmlplist.parse(xmlContent);
}
/**
* Parses a file in xml or binary format of plist
* @param {string} plist The plist file path
* @param {boolean} mustExist If set to false, this method will return an empty object when the file doesn't exist
* @param {boolean} quiet If set to false, the plist path will be logged in debug level
* @returns {Object} parsed plist JS Object
*/
async function parsePlistFile (plist, mustExist = true, quiet = true) {
// handle nonexistent file
if (!await fs.exists(plist)) {
if (mustExist) {
log.errorAndThrow(`Plist file doesn't exist: '${plist}'`);
} else {
log.debug(`Plist file '${plist}' does not exist. Returning an empty plist.`);
return {};
}
}
let obj = {};
let type = 'binary';
try {
obj = await bplistParse.parseFile(plist);
if (obj.length) {
obj = obj[0];
} else {
throw new Error(`Binary file '${plist}' appears to be empty`);
}
} catch (ign) {
try {
obj = await parseXmlPlistFile(plist);
type = 'xml';
} catch (err) {
log.errorAndThrow(`Could not parse plist file '${plist}' as XML: ${err.message}`);
}
}
if (!quiet) {
log.debug(`Parsed plist file '${plist}' as ${type}`);
}
return obj;
}
/**
* Updates a plist file with the given fields
* @param {string} plist The plist file path
* @param {Object} updatedFields The updated fields-value pairs
* @param {boolean} binary If set to false, the file will be created as a xml plist
* @param {boolean} mustExist If set to false, this method will update an empty plist
* @param {boolean} quiet If set to false, the plist path will be logged in debug level
*/
async function updatePlistFile (plist, updatedFields, binary = true, mustExist = true, quiet = true) {
let obj;
try {
obj = await parsePlistFile(plist, mustExist);
} catch (err) {
log.errorAndThrow(`Could not update plist: ${err.message}`);
}
_.extend(obj, updatedFields);
let newPlist = binary ? bplistCreate(obj) : xmlplist.build(obj);
try {
await fs.writeFile(plist, newPlist);
} catch (err) {
log.errorAndThrow(`Could not save plist: ${err.message}`);
}
if (!quiet) {
log.debug(`Wrote plist file '${plist}'`);
}
}
/**
* Creates a binary plist Buffer from an object
* @param {Object} data The object to be turned into a binary plist
* @returns {Buffer} plist in the form of a binary buffer
*/
function createBinaryPlist (data) {
return bplistCreate(data);
}
/**
* Parses a Buffer into an Object
* @param {Buffer} data The buffer of a binary plist
*/
function parseBinaryPlist (data) {
return bplistParse.parseBuffer(data);
}
function getXmlPlist (data) {
if (_.isString(data) && data.startsWith(PLIST_IDENTIFIER.TEXT)) {
return data;
}
if (_.isBuffer(data) && PLIST_IDENTIFIER.BUFFER.compare(data, 0, PLIST_IDENTIFIER.BUFFER.length) === 0) {
return data.toString();
}
return null;
}
function getBinaryPlist (data) {
if (_.isString(data) && data.startsWith(BPLIST_IDENTIFIER.TEXT)) {
return Buffer.from(data);
}
if (_.isBuffer(data) && BPLIST_IDENTIFIER.BUFFER.compare(data, 0, BPLIST_IDENTIFIER.BUFFER.length) === 0) {
return data;
}
return null;
}
/**
* Creates a plist from an object
* @param {Object} object The JS object to be turned into a plist
* @param {boolean} binary Set it to true for a binary plist
* @returns {string|Buffer} a binary plist Buffer or an XML plist string, depending on the `binary` parameter
*/
function createPlist (object, binary = false) {
if (binary) {
return createBinaryPlist(object);
} else {
return xmlplist.build(object);
}
}
/**
* Parses a plist provided as a Buffer or a string into a JS object
* @param {string|Buffer} data The plist in the form of string or Buffer
* @returns {Object} parsed plist JS Object
* @throws Will throw an error if the plist type is unknown
*/
function parsePlist (data) {
let textPlist = getXmlPlist(data);
if (textPlist) {
return xmlplist.parse(textPlist);
}
let binaryPlist = getBinaryPlist(data);
if (binaryPlist) {
return parseBinaryPlist(binaryPlist)[0];
}
throw new Error(`Unknown type of plist, data: ${data.toString()}`);
}
export { parsePlistFile, parsePlist, createPlist, updatePlistFile, createBinaryPlist, parseBinaryPlist };
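A round-trip sketch for the in-memory helpers above; the dictionary content is arbitrary:
```js
import { createPlist, parsePlist } from './plist';

const original = {CFBundleIdentifier: 'com.example.app', IsDemo: true};
// XML string by default, binary Buffer when the second argument is true
const asXml = createPlist(original);
const asBinary = createPlist(original, true);
// parsePlist detects the format from the leading bytes ('<' vs 'bplist00')
console.log(parsePlist(asXml).CFBundleIdentifier);    // com.example.app
console.log(parsePlist(asBinary).CFBundleIdentifier); // com.example.app
```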
+44
View File
@@ -0,0 +1,44 @@
import { exec } from 'teen_process';
/*
* Exit Status for pgrep and pkill (`man pkill`)
* 0. One or more processes matched the criteria.
* 1. No processes matched.
* 2. Syntax error in the command line.
* 3. Fatal error: out of memory etc.
*/
async function getProcessIds (appName) {
let pids;
try {
let {stdout} = await exec('pgrep', ['-x', appName]);
pids = stdout.trim().split('\n').map((pid) => parseInt(pid, 10));
} catch (err) {
if (parseInt(err.code, 10) !== 1) {
throw new Error(`Error getting process ids for app '${appName}': ${err.message}`);
}
pids = [];
}
return pids;
}
async function killProcess (appName, force = false) {
let pids = await getProcessIds(appName);
if (pids.length === 0) {
// the process is not running
return;
}
try {
let args = force ? ['-9'] : [];
args.push('-x', appName);
await exec('pkill', args);
} catch (err) {
if (parseInt(err.code, 10) !== 1) {
throw new Error(`Error killing app '${appName}' with pkill: ${err.message}`);
}
}
}
export { getProcessIds, killProcess };
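A small sketch of using the two helpers together; the process name is caller-supplied, and `pgrep`/`pkill` must be available on the host:
```js
import { getProcessIds, killProcess } from './process';

async function stopIfRunning (appName) {
  const pids = await getProcessIds(appName);
  if (pids.length) {
    // force=true sends SIGKILL (-9) instead of the default signal
    await killProcess(appName, true);
  }
  return pids.length;
}
```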
+48
View File
@@ -0,0 +1,48 @@
import { exec } from 'teen_process';
import _ from 'lodash';
import os from 'os';
const VERSION_PATTERN = /^(\d+\.\d+)/m;
function isWindows () {
return os.type() === 'Windows_NT';
}
function isMac () {
return os.type() === 'Darwin';
}
function isLinux () {
return !isWindows() && !isMac();
}
function isOSWin64 () {
return process.arch === 'x64' || _.has(process.env, 'PROCESSOR_ARCHITEW6432');
}
async function arch () {
if (isLinux() || isMac()) {
let {stdout} = await exec('uname', ['-m']);
return stdout.trim() === 'i686' ? '32' : '64';
} else if (isWindows()) {
let is64 = isOSWin64();
return is64 ? '64' : '32';
}
}
async function macOsxVersion () {
let stdout;
try {
stdout = (await exec('sw_vers', ['-productVersion'])).stdout.trim();
} catch (err) {
throw new Error(`Could not detect Mac OS X Version: ${err}`);
}
const versionMatch = VERSION_PATTERN.exec(stdout);
if (!versionMatch) {
throw new Error(`Could not detect Mac OS X Version from sw_vers output: '${stdout}'`);
}
return versionMatch[1];
}
export { isWindows, isMac, isLinux, isOSWin64, arch, macOsxVersion };
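A sketch combining the platform helpers above; note that `macOsxVersion()` throws on non-macOS hosts:
```js
import { isWindows, isMac, arch, macOsxVersion } from './system';

async function describeHost () {
  const bits = await arch(); // '32' or '64'
  const name = isWindows()
    ? 'Windows'
    : isMac() ? `macOS ${await macOsxVersion()}` : 'Linux';
  return `${name} (${bits}-bit)`;
}
```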
+125
View File
@@ -0,0 +1,125 @@
/* This library is originated from temp.js at http://github.com/bruce/node-temp */
import fs from './fs';
import os from 'os';
import nodePath from 'path';
import cnst from 'constants';
import log from './logger';
const RDWR_EXCL = cnst.O_CREAT | cnst.O_TRUNC | cnst.O_RDWR | cnst.O_EXCL;
/**
* Generate a temporary directory in os.tmpdir() or process.env.APPIUM_TMP_DIR.
* e.g.
* - No `process.env.APPIUM_TMP_DIR`: `/var/folders/34/2222sh8n27d6rcp7jqlkw8km0000gn/T/xxxxxxxx.yyyy`
* - With `process.env.APPIUM_TMP_DIR = '/path/to/root'`: `/path/to/root/xxxxxxxx.yyyy`
*
* @returns {string} A path to the temporary directory
*/
async function tempDir () {
const now = new Date();
const filePath = nodePath.join(process.env.APPIUM_TMP_DIR || os.tmpdir(),
[
now.getFullYear(), now.getMonth(), now.getDate(),
'-',
process.pid,
'-',
(Math.random() * 0x100000000 + 1).toString(36),
].join(''));
// creates a temp directory using the date and a random string
await fs.mkdir(filePath);
return filePath;
}
/**
* @typedef {Object} Affixes
* @property {string} prefix - prefix of the temp directory name
* @property {string} suffix - suffix of the temp directory name
*/
/**
* Generate a temporary directory in os.tmpdir() or process.env.APPIUM_TMP_DIR
* with arbitrary prefix/suffix for the directory name.
*
* @param {string|Affixes} rawAffixes
* @param {?string} defaultPrefix
* @returns {string} A path to the temporary directory with rawAffixes and defaultPrefix
*/
async function path (rawAffixes, defaultPrefix) {
const affixes = parseAffixes(rawAffixes, defaultPrefix);
const name = `${affixes.prefix || ''}${affixes.suffix || ''}`;
const tempDirectory = await tempDir();
return nodePath.join(tempDirectory, name);
}
/**
* @typedef {Object} OpenedAffixes
* @property {string} path - The path to file
* @property {integer} fd - The file descriptor opened
*/
/**
* Generate a temporary file in os.tmpdir() or process.env.APPIUM_TMP_DIR
* with an arbitrary prefix/suffix for the file name, and open it.
*
* @param {Affixes} affixes
* @returns {OpenedAffixes}
*/
async function open (affixes) {
const filePath = await path(affixes, 'f-');
try {
let fd = await fs.open(filePath, RDWR_EXCL, 0o600);
// the file is opened with 0o600 (owner read/write only) permissions
return {path: filePath, fd};
} catch (err) {
log.errorAndThrow(err);
}
}
/**
*
* Returns prefix/suffix object
*
* @param {string|Affixes} rawAffixes
* @param {?string} defaultPrefix
* @returns {Affixes}
*/
function parseAffixes (rawAffixes, defaultPrefix) {
let affixes = {prefix: null, suffix: null};
if (rawAffixes) {
switch (typeof rawAffixes) {
case 'string':
affixes.prefix = rawAffixes;
break;
case 'object':
affixes = rawAffixes;
break;
default:
throw new Error(`Unknown affix declaration: ${rawAffixes}`);
}
} else {
affixes.prefix = defaultPrefix;
}
return affixes;
}
const _static = tempDir();
/**
* Returns a new path to a temporary directory
*
* @returns {string} A path to a newly created temporary directory
*/
const openDir = tempDir;
/**
* Returns a path to a temporary directory which is defined as static in the same process
*
* @returns {string} A temp directory path which is defined as static in the same process
*/
async function staticDir () { // eslint-disable-line require-await
return _static;
}
export { open, path, openDir, staticDir };
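A sketch of creating scratch locations with the functions above; `APPIUM_TMP_DIR` controls where they are rooted, and the affixes are arbitrary:
```js
import * as tempDir from './tempdir';
import fs from './fs';

async function makeScratchFile () {
  // a unique directory per call
  const dir = await tempDir.openDir();
  // or an exclusively-created file with a prefix/suffix
  const {path: filePath, fd} = await tempDir.open({prefix: 'appium-', suffix: '.log'});
  await fs.close(fd);
  return {dir, filePath};
}
```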
+119
View File
@@ -0,0 +1,119 @@
import _ from 'lodash';
const NS_PER_S = 1e9;
const NS_PER_MS = 1e6;
/**
* Class representing a duration, encapsulating the number and units.
*/
class Duration {
constructor (nanos) {
this._nanos = nanos;
}
get nanos () {
return this._nanos;
}
/**
* Get the duration as nanoseconds
*
* @returns {number} The duration as nanoseconds
*/
get asNanoSeconds () {
return this.nanos;
}
/**
* Get the duration converted into milliseconds
*
* @returns {number} The duration as milliseconds
*/
get asMilliSeconds () {
return this.nanos / NS_PER_MS;
}
/**
* Get the duration converted into seconds
*
* @returns {number} The duration as seconds
*/
get asSeconds () {
return this.nanos / NS_PER_S;
}
toString () {
// default to milliseconds, rounded
return this.asMilliSeconds.toFixed(0);
}
}
class Timer {
/**
* Creates a timer
*/
constructor () {
this._startTime = null;
}
get startTime () {
return this._startTime;
}
/**
* Start the timer
*
* @return {Timer} The current instance, for chaining
*/
start () {
if (!_.isNull(this.startTime)) {
throw new Error('Timer has already been started.');
}
// once Node 10 is no longer supported, this check can be removed
this._startTime = _.isFunction(process.hrtime.bigint)
? process.hrtime.bigint()
: process.hrtime();
return this;
}
/**
* Get the duration since the timer was started
*
* @return {Duration} the duration
*/
getDuration () {
if (_.isNull(this.startTime)) {
throw new Error(`Unable to get duration. Timer was not started`);
}
let nanoDuration;
if (_.isArray(this.startTime)) {
// startTime was created using process.hrtime()
const [seconds, nanos] = process.hrtime(this.startTime);
nanoDuration = seconds * NS_PER_S + nanos;
} else if (typeof this.startTime === 'bigint' && _.isFunction(process.hrtime.bigint)) {
// startTime was created using process.hrtime.bigint()
const endTime = process.hrtime.bigint();
// get the difference, and convert to number
nanoDuration = Number(endTime - this.startTime);
} else {
throw new Error(`Unable to get duration. Start time '${this.startTime}' cannot be parsed`);
}
return new Duration(nanoDuration);
}
toString () {
try {
return this.getDuration().toString();
} catch (err) {
return `<err: ${err.message}>`;
}
}
}
export { Timer, Duration };
export default Timer;
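A short sketch of metering an arbitrary async operation with the classes above:
```js
import B from 'bluebird';
import Timer from './timing';

async function timedSleep (ms) {
  const timer = new Timer().start();
  await B.delay(ms);
  const duration = timer.getDuration();
  // pick the unit that fits the consumer; toString() defaults to rounded milliseconds
  return `${duration.asMilliSeconds.toFixed(0)}ms (${duration.asSeconds.toFixed(3)}s)`;
}
```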
+520
View File
@@ -0,0 +1,520 @@
import B from 'bluebird';
import _ from 'lodash';
import os from 'os';
import path from 'path';
import fs from './fs';
import semver from 'semver';
import {
// https://www.npmjs.com/package/shell-quote
quote as shellQuote,
parse as shellParse,
} from 'shell-quote';
import pluralizeLib from 'pluralize';
import stream from 'stream';
import { Base64Encode } from 'base64-stream';
import {
// https://www.npmjs.com/package/uuid
v1 as uuidV1, v3 as uuidV3,
v4 as uuidV4, v5 as uuidV5
} from 'uuid';
import _lockfile from 'lockfile';
const W3C_WEB_ELEMENT_IDENTIFIER = 'element-6066-11e4-a52e-4f735466cecf';
const KiB = 1024;
const MiB = KiB * 1024;
const GiB = MiB * 1024;
export function hasContent (val) {
return _.isString(val) && val !== '';
}
// return true if the value is not undefined, null, or NaN.
function hasValue (val) {
let hasVal = false;
// avoid incorrectly evaluating `0` as false
if (_.isNumber(val)) {
hasVal = !_.isNaN(val);
} else {
hasVal = !_.isUndefined(val) && !_.isNull(val);
}
return hasVal;
}
// escape spaces in string, for commandline calls
function escapeSpace (str) {
return str.split(/ /).join('\\ ');
}
function escapeSpecialChars (str, quoteEscape) {
if (typeof str !== 'string') {
return str;
}
if (typeof quoteEscape === 'undefined') {
quoteEscape = false;
}
str = str
.replace(/[\\]/g, '\\\\')
.replace(/[\/]/g, '\\/') // eslint-disable-line no-useless-escape
.replace(/[\b]/g, '\\b')
.replace(/[\f]/g, '\\f')
.replace(/[\n]/g, '\\n')
.replace(/[\r]/g, '\\r')
.replace(/[\t]/g, '\\t')
.replace(/[\"]/g, '\\"') // eslint-disable-line no-useless-escape
.replace(/\\'/g, "\\'");
if (quoteEscape) {
let re = new RegExp(quoteEscape, 'g');
str = str.replace(re, `\\${quoteEscape}`);
}
return str;
}
function localIp () {
let ip = _.chain(os.networkInterfaces())
.values()
.flatten()
.filter(function (val) {
return (val.family === 'IPv4' && val.internal === false);
})
.map('address')
.first()
.value();
return ip;
}
/*
* Creates a promise that is cancellable, and will timeout
* after `ms` delay
*/
function cancellableDelay (ms) {
let timer;
let resolve;
let reject;
const delay = new B.Promise((_resolve, _reject) => {
resolve = _resolve;
reject = _reject;
timer = setTimeout(function () {
resolve();
}, ms);
});
// override Bluebird's `cancel`, which does not work when using `await` on
// a promise, since `resolve`/`reject` are never called
delay.cancel = function () {
clearTimeout(timer);
reject(new B.CancellationError());
};
return delay;
}
function multiResolve (roots, ...args) {
return roots.map((root) => path.resolve(root, ...args));
}
/*
* Parses an object if possible. Otherwise returns the object without parsing.
*/
function safeJsonParse (obj) {
try {
return JSON.parse(obj);
} catch (ign) {
// ignore: this is not json parsable
return obj;
}
}
/*
* Stringifies the object passed in, converting Buffers into Strings for better
* display. This mimics JSON.stringify (see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify)
* except the `replacer` argument can only be a function.
*
* @param {object} obj - the object to be serialized
* @param {?function} replacer - function to transform the properties added to the
* serialized object
* @param {?number|string} space - used to insert white space into the output JSON
* string for readability purposes. Defaults to 2
* returns {string} - the JSON object serialized as a string
*/
function jsonStringify (obj, replacer, space = 2) {
// if no replacer is passed, or it is not a function, just use a pass-through
if (!_.isFunction(replacer)) {
replacer = (k, v) => v;
}
// Buffers cannot be serialized in a readable way
const bufferToJSON = Buffer.prototype.toJSON;
delete Buffer.prototype.toJSON;
try {
return JSON.stringify(obj, (key, value) => {
const updatedValue = Buffer.isBuffer(value)
? value.toString('utf8')
: value;
return replacer(key, updatedValue);
}, space);
} finally {
// restore the function, so as to not break further serialization
Buffer.prototype.toJSON = bufferToJSON;
}
}
/*
* Removes the wrapper from element, if it exists.
* { ELEMENT: 4 } becomes 4
* { element-6066-11e4-a52e-4f735466cecf: 5 } becomes 5
*/
function unwrapElement (el) {
for (const propName of [W3C_WEB_ELEMENT_IDENTIFIER, 'ELEMENT']) {
if (_.has(el, propName)) {
return el[propName];
}
}
return el;
}
function wrapElement (elementId) {
return {
ELEMENT: elementId,
[W3C_WEB_ELEMENT_IDENTIFIER]: elementId,
};
}
/*
* Returns an object consisting of all properties in the original object
* for which the predicate returns a truthy value.
* If the predicate is
* * missing - it will remove all properties whose values are `undefined`
* * a scalar - it will test all properties' values against that value
* * a function - it will pass each value and the original object into the function
*/
function filterObject (obj, predicate) {
let newObj = _.clone(obj);
if (_.isUndefined(predicate)) {
// remove any element from the object whose value is undefined
predicate = (v) => !_.isUndefined(v);
} else if (!_.isFunction(predicate)) {
// make predicate into a function
const valuePredicate = predicate;
predicate = (v) => v === valuePredicate;
}
for (const key of Object.keys(obj)) {
if (!predicate(obj[key], obj)) {
delete newObj[key];
}
}
return newObj;
}
/**
* Converts number of bytes to a readable size string.
*
* @param {number|string} bytes - The actual number of bytes.
* @returns {string} The actual string representation, for example
* '1.00 KB' for '1024 B'
* @throws {Error} If bytes count cannot be converted to an integer or
* if it is less than zero.
*/
function toReadableSizeString (bytes) {
const intBytes = parseInt(bytes, 10);
if (isNaN(intBytes) || intBytes < 0) {
throw new Error(`Cannot convert '${bytes}' to a readable size format`);
}
if (intBytes >= GiB) {
return `${parseFloat(intBytes / (GiB * 1.0)).toFixed(2)} GB`;
} else if (intBytes >= MiB) {
return `${parseFloat(intBytes / (MiB * 1.0)).toFixed(2)} MB`;
} else if (intBytes >= KiB) {
return `${parseFloat(intBytes / (KiB * 1.0)).toFixed(2)} KB`;
}
return `${intBytes} B`;
}
/**
* Checks whether the given path is a subpath of the
* particular root folder. Both paths can include .. and . specifiers
*
* @param {string} originalPath The absolute file/folder path
* @param {string} root The absolute root folder path
* @param {?boolean} forcePosix Set it to true if paths must be interpreted in POSIX format
* @returns {boolean} true if the given original path is the subpath of the root folder
* @throws {Error} if any of the given paths is not absolute
*/
function isSubPath (originalPath, root, forcePosix = null) {
const pathObj = forcePosix ? path.posix : path;
for (const p of [originalPath, root]) {
if (!pathObj.isAbsolute(p)) {
throw new Error(`'${p}' is expected to be an absolute path`);
}
}
const normalizedRoot = pathObj.normalize(root);
const normalizedPath = pathObj.normalize(originalPath);
return normalizedPath.startsWith(normalizedRoot);
}
/**
* Checks whether the given paths are pointing to the same file system
* destination.
*
* @param {string} path1 - Absolute or relative path to a file/folder
* @param {string} path2 - Absolute or relative path to a file/folder
* @param {...string} pathN - Zero or more absolute or relative paths to files/folders
* @returns {boolean} true if all paths are pointing to the same file system item
*/
async function isSameDestination (path1, path2, ...pathN) {
const allPaths = [path1, path2, ...pathN];
if (!await B.reduce(allPaths, async (a, b) => a && await fs.exists(b), true)) {
return false;
}
const areAllItemsEqual = (arr) => !!arr.reduce((a, b) => a === b ? a : NaN);
if (areAllItemsEqual(allPaths)) {
return true;
}
// Node 10.5.0 introduced bigint support in stat, which allows for more precision
// however below that the options get interpreted as the callback
// TODO: remove when Node 10 is no longer supported
let mapCb = async (x) => (await fs.stat(x, {bigint: true})).ino;
if (semver.lt(process.version, '10.5.0')) {
mapCb = async (x) => (await fs.stat(x)).ino;
}
return areAllItemsEqual(await B.map(allPaths, mapCb));
}
/**
* Coerces the given number/string to a valid version string
*
* @param {string|number} ver - Version string to coerce
* @param {boolean} strict [true] - If true then an exception will be thrown
* if `ver` cannot be coerced
* @returns {string} Coerced version number or null if the string cannot be
* coerced and strict mode is disabled
* @throws {Error} if strict mode is enabled and `ver` cannot be coerced
*/
function coerceVersion (ver, strict = true) {
const result = semver.valid(semver.coerce(`${ver}`));
if (strict && !result) {
throw new Error(`'${ver}' cannot be coerced to a valid version number`);
}
return result;
}
const SUPPORTED_OPERATORS = ['==', '!=', '>', '<', '>=', '<=', '='];
/**
* Compares two version strings
*
* @param {string|number} ver1 - The first version number to compare. Should be a valid
* version number supported by semver parser.
* @param {string|number} ver2 - The second version number to compare. Should be a valid
* version number supported by semver parser.
* @param {string} operator - One of supported version number operators:
* ==, !=, >, <, <=, >=, =
* @returns {boolean} true or false depending on the actual comparison result
* @throws {Error} if an unsupported operator is supplied or any of the supplied
* version strings cannot be coerced
*/
function compareVersions (ver1, operator, ver2) {
if (!SUPPORTED_OPERATORS.includes(operator)) {
throw new Error(`The '${operator}' comparison operator is not supported. ` +
`Only '${JSON.stringify(SUPPORTED_OPERATORS)}' operators are supported`);
}
const semverOperator = ['==', '!='].includes(operator) ? '=' : operator;
const result = semver.satisfies(coerceVersion(ver1), `${semverOperator}${coerceVersion(ver2)}`);
return operator === '!=' ? !result : result;
}
/**
* Add appropriate quotes to command arguments. See https://github.com/substack/node-shell-quote
* for more details
*
* @param {string|Array<string>} args - The arguments to be quoted
* @returns {string} - The arguments, quoted
*/
function quote (args) {
return shellQuote(args);
}
/**
* This function is necessary to workaround unexpected memory leaks
* caused by NodeJS string interning
* behavior described in https://bugs.chromium.org/p/v8/issues/detail?id=2869
*
* @param {*} s - The string to unleak
* @return {string} Either the unleaked string or the original object converted to string
*/
function unleakString (s) {
return ` ${s}`.substr(1);
}
/**
* @typedef {Object} PluralizeOptions
* @property {?boolean} inclusive [false] - Whether to prefix with the number (e.g., 3 ducks)
*/
/**
* Get the form of a word appropriate to the count
*
* @param {string} word - The word to pluralize
* @param {number} count - How many of the word exist
* @param {?PluralizeOptions|boolean} options - pluralization options, or a boolean
* shorthand for the `options.inclusive` property
* @returns {string} The word pluralized according to the number
*/
function pluralize (word, count, options = {}) {
let inclusive = false;
if (_.isBoolean(options)) {
// if passed in as a boolean
inclusive = options;
} else if (_.isBoolean(options?.inclusive)) {
// if passed in as an options hash
inclusive = options.inclusive;
}
return pluralizeLib(word, count, inclusive);
}
/**
* @typedef {Object} EncodingOptions
* @property {number} maxSize [1073741824] The maximum size of
* the resulting buffer in bytes. This is set to 1GB by default, because
* Appium limits the maximum HTTP body size to 1GB. Also, the NodeJS heap
* size must be enough to keep the resulting object (usually this size is
* limited to 1.4 GB)
*/
/**
* Converts contents of a local file to an in-memory base-64 encoded buffer.
* The operation is memory-usage friendly and should be used while encoding
* large files to base64
*
* @param {string} srcPath The full path to the file being encoded
* @param {EncodingOptions} opts
* @returns {Buffer} base64-encoded content of the source file as memory buffer
* @throws {Error} if there was an error while reading the source file
* or the source file is too big
*/
async function toInMemoryBase64 (srcPath, opts = {}) {
if (!(await fs.exists(srcPath)) || (await fs.stat(srcPath)).isDirectory()) {
throw new Error(`No such file: ${srcPath}`);
}
const {
maxSize = 1 * GiB,
} = opts;
const resultBuffers = [];
let resultBuffersSize = 0;
const resultWriteStream = new stream.Writable({
write: (buffer, encoding, next) => {
resultBuffers.push(buffer);
resultBuffersSize += buffer.length;
if (maxSize > 0 && resultBuffersSize > maxSize) {
resultWriteStream.emit('error', new Error(`The size of the resulting ` +
`buffer must not be greater than ${toReadableSizeString(maxSize)}`));
}
next();
},
});
const readerStream = fs.createReadStream(srcPath);
const base64EncoderStream = new Base64Encode();
const resultWriteStreamPromise = new B((resolve, reject) => {
resultWriteStream.once('error', (e) => {
readerStream.unpipe(base64EncoderStream);
base64EncoderStream.unpipe(resultWriteStream);
readerStream.destroy();
reject(e);
});
resultWriteStream.once('finish', resolve);
});
const readStreamPromise = new B((resolve, reject) => {
readerStream.once('close', resolve);
readerStream.once('error', (e) => reject(
new Error(`Failed to read '${srcPath}': ${e.message}`)));
});
readerStream.pipe(base64EncoderStream);
base64EncoderStream.pipe(resultWriteStream);
await B.all([readStreamPromise, resultWriteStreamPromise]);
return Buffer.concat(resultBuffers);
}
/**
* @typedef {Object} LockFileOptions
* @property {number} timeout [120] The max time in seconds to wait for the lock
* @property {boolean} tryRecovery [false] Whether to try lock recovery if
* the first attempt to acquire it timed out.
*/
/**
* Create an async function which, when called, will not proceed until a certain file is no
* longer present on the system. This allows for preventing concurrent behavior across processes
* using a known lockfile path.
*
* @param {string} lockFile The full path to the file used for the lock
* @param {LockFileOptions} opts
* @returns {AsyncFunction} async function that takes another async function defining the locked
* behavior
*/
function getLockFileGuard (lockFile, opts = {}) {
const {
timeout = 120,
tryRecovery = false,
} = opts;
const lock = B.promisify(_lockfile.lock);
const check = B.promisify(_lockfile.check);
const unlock = B.promisify(_lockfile.unlock);
const guard = async (behavior) => {
let triedRecovery = false;
do {
try {
// if the lockfile doesn't exist, lock it synchronously to make sure no other call
// on the same spin of the event loop can also initiate a lock. If the lockfile does exist
// then just use the regular async 'lock' method which will wait on the lock.
if (_lockfile.checkSync(lockFile)) {
await lock(lockFile, {wait: timeout * 1000});
} else {
_lockfile.lockSync(lockFile);
}
break;
} catch (e) {
if (_.includes(e.message, 'EEXIST') && tryRecovery && !triedRecovery) {
// There could be cases where a process has been forcefully terminated
// without a chance to clean up pending locks: https://github.com/npm/lockfile/issues/26
_lockfile.unlockSync(lockFile);
triedRecovery = true;
continue;
}
throw new Error(`Could not acquire lock on '${lockFile}' after ${timeout}s. ` +
`Original error: ${e.message}`);
}
// eslint-disable-next-line no-constant-condition
} while (true);
try {
return await behavior();
} finally {
// whether the behavior succeeded or not, get rid of the lock
await unlock(lockFile);
}
};
guard.check = async () => await check(lockFile);
return guard;
}
export {
hasValue, escapeSpace, escapeSpecialChars, localIp, cancellableDelay,
multiResolve, safeJsonParse, wrapElement, unwrapElement, filterObject,
toReadableSizeString, isSubPath, W3C_WEB_ELEMENT_IDENTIFIER,
isSameDestination, compareVersions, coerceVersion, quote, unleakString,
jsonStringify, pluralize, GiB, MiB, KiB, toInMemoryBase64,
uuidV1, uuidV3, uuidV4, uuidV5, shellParse, getLockFileGuard
};
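A few usage sketches for the helpers exported above; the version numbers, sizes and the lock file path are arbitrary examples:
```js
import {
  compareVersions, pluralize, toReadableSizeString, getLockFileGuard,
} from './util';

compareVersions('13.4', '>=', '13.1');   // true
pluralize('session', 3, true);           // '3 sessions'
toReadableSizeString(2 * 1024 * 1024);   // '2.00 MB'

// serialize a critical section across processes via a shared lock file
async function withLock (behavior) {
  const guard = getLockFileGuard('/tmp/example.lock', {timeout: 60, tryRecovery: true});
  return await guard(behavior);
}
```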
+485
View File
@@ -0,0 +1,485 @@
import _ from 'lodash';
import B from 'bluebird';
import yauzl from 'yauzl';
import archiver from 'archiver';
import { createWriteStream } from 'fs';
import path from 'path';
import { mkdirp } from '../lib/mkdirp';
import stream from 'stream';
import fs from './fs';
import { Base64Encode } from 'base64-stream';
import { toReadableSizeString, GiB } from './util';
import Timer from './timing';
import log from './logger';
import getStream from 'get-stream';
const openZip = B.promisify(yauzl.open);
const pipeline = B.promisify(stream.pipeline);
const ZIP_MAGIC = 'PK';
const IFMT = 61440;
const IFDIR = 16384;
const IFLNK = 40960;
// This class is mostly copied from https://github.com/maxogden/extract-zip/blob/master/index.js
class ZipExtractor {
constructor (sourcePath, opts = {}) {
this.zipPath = sourcePath;
this.opts = opts;
this.canceled = false;
}
extractFileName (entry) {
return _.isBuffer(entry.fileName) ? entry.fileName.toString(this.opts.fileNamesEncoding) : entry.fileName;
}
async extract () {
const {
dir,
fileNamesEncoding,
} = this.opts;
this.zipfile = await openZip(this.zipPath, {
lazyEntries: true,
// https://github.com/thejoshwolfe/yauzl/commit/cc7455ac789ba84973184e5ebde0581cdc4c3b39#diff-04c6e90faac2675aa89e2176d2eec7d8R95
decodeStrings: !fileNamesEncoding,
});
this.canceled = false;
return new B((resolve, reject) => {
this.zipfile.on('error', (err) => {
this.canceled = true;
reject(err);
});
this.zipfile.readEntry();
this.zipfile.on('close', () => {
if (!this.canceled) {
resolve();
}
});
this.zipfile.on('entry', async (entry) => {
if (this.canceled) {
return;
}
const fileName = this.extractFileName(entry);
if (fileName.startsWith('__MACOSX/')) {
this.zipfile.readEntry();
return;
}
const destDir = path.dirname(path.join(dir, fileName));
try {
await fs.mkdir(destDir, {recursive: true});
const canonicalDestDir = await fs.realpath(destDir);
const relativeDestDir = path.relative(dir, canonicalDestDir);
if (relativeDestDir.split(path.sep).includes('..')) {
throw new Error(`Out of bound path "${canonicalDestDir}" found while processing file ${fileName}`);
}
await this.extractEntry(entry);
this.zipfile.readEntry();
} catch (err) {
this.canceled = true;
this.zipfile.close();
reject(err);
}
});
});
}
async extractEntry (entry) {
if (this.canceled) {
return;
}
const {
dir,
} = this.opts;
const fileName = this.extractFileName(entry);
const dest = path.join(dir, fileName);
// convert external file attr int into a fs stat mode int
const mode = (entry.externalFileAttributes >> 16) & 0xFFFF;
// check if it's a symlink or dir (using stat mode constants)
const isSymlink = (mode & IFMT) === IFLNK;
const isDir = (mode & IFMT) === IFDIR
// Failsafe, borrowed from jsZip
|| fileName.endsWith('/')
// check for windows weird way of specifying a directory
// https://github.com/maxogden/extract-zip/issues/13#issuecomment-154494566
|| (entry.versionMadeBy >> 8 === 0 && entry.externalFileAttributes === 16);
const procMode = this.getExtractedMode(mode, isDir) & 0o777;
// always ensure folders are created
const destDir = isDir ? dest : path.dirname(dest);
const mkdirOptions = { recursive: true };
if (isDir) {
mkdirOptions.mode = procMode;
}
await fs.mkdir(destDir, mkdirOptions);
if (isDir) {
return;
}
const readStream = await B.promisify(this.zipfile.openReadStream.bind(this.zipfile))(entry);
if (isSymlink) {
const link = await getStream(readStream);
await fs.symlink(link, dest);
} else {
await pipeline(readStream, fs.createWriteStream(dest, { mode: procMode }));
}
}
getExtractedMode (entryMode, isDir) {
const {
defaultDirMode,
defaultFileMode,
} = this.opts;
let mode = entryMode;
// Set defaults, if necessary
if (mode === 0) {
if (isDir) {
if (defaultDirMode) {
mode = parseInt(defaultDirMode, 10);
}
if (!mode) {
mode = 0o755;
}
} else {
if (defaultFileMode) {
mode = parseInt(defaultFileMode, 10);
}
if (!mode) {
mode = 0o644;
}
}
}
return mode;
}
}
/**
* @typedef {Object} ExtractAllOptions
* @property {?string} fileNamesEncoding The encoding to use for extracted file names.
* For ZIP archives created on MacOS it is usually expected to be `utf8`.
* By default it is autodetected based on the entry metadata and is only needed to be set explicitly
* if the particular archive does not comply to the standards, which leads to corrupted file names
* after extraction.
* @property {?number} defaultDirMode [0o755] The default permissions for extracted folders. It is only
* applied when the extractor is unable to retrieve this value for a directory from the archive
* metadata.
* @property {?number} defaultFileMode [0o644] The default permissions for extracted files. It is only
* applied when the extractor is unable to retrieve this value for a file from the archive
* metadata.
*/
/**
* Extract zipfile to a directory
*
* @param {string} zipFilePath The full path to the source ZIP file
* @param {string} destDir The full path to the destination folder
* @param {?ExtractAllOptions} opts
*/
async function extractAllTo (zipFilePath, destDir, opts = {}) {
if (!path.isAbsolute(destDir)) {
throw new Error(`Target path '${destDir}' is expected to be absolute`);
}
await fs.mkdir(destDir, {recursive: true});
const extractor = new ZipExtractor(zipFilePath, {
...opts,
dir: await fs.realpath(destDir),
});
await extractor.extract();
}
/**
* Extract a single zip entry to a directory
*
* @param {Streamable} zipFile The source ZIP stream
* @param {yauzl.ZipEntry} entry The entry instance
* @param {string} destDir The full path to the destination folder
*/
async function _extractEntryTo (zipFile, entry, destDir) {
const dstPath = path.resolve(destDir, entry.fileName);
// Create dest directory if doesn't exist already
if (/\/$/.test(entry.fileName)) {
if (!await fs.exists(dstPath)) {
await mkdirp(dstPath);
}
return;
} else if (!await fs.exists(path.dirname(dstPath))) {
await mkdirp(path.dirname(dstPath));
}
// Create a write stream
const writeStream = createWriteStream(dstPath, {flags: 'w'});
const writeStreamPromise = new B((resolve, reject) => {
writeStream.once('finish', resolve);
writeStream.once('error', reject);
});
// Create zipReadStream and pipe data to the write stream
// (for some odd reason B.promisify doesn't work on zipfile.openReadStream, it causes an error 'closed')
const zipReadStream = await new B((resolve, reject) => {
zipFile.openReadStream(entry, (err, readStream) => err ? reject(err) : resolve(readStream));
});
const zipReadStreamPromise = new B((resolve, reject) => {
zipReadStream.once('end', resolve);
zipReadStream.once('error', reject);
});
zipReadStream.pipe(writeStream);
// Wait for the zipReadStream and writeStream to end before returning
return await B.all([
zipReadStreamPromise,
writeStreamPromise,
]);
}
/**
* @typedef {Object} ZipEntry
* @property {yauzl.ZipEntry} entry The actual entry instance
* @property {function} extractEntryTo An async function, which accepts one parameter.
* This parameter contains the destination folder path to which this function is going to extract the entry.
*/
/**
* Get entries for a zip folder
*
* @param {string} zipFilePath The full path to the source ZIP file
* @param {function} onEntry Callback when entry is read.
* The callback is expected to accept one argument of ZipEntry type.
* The iteration through the source zip file will be terminated as soon as
* the result of this function equals to `false`.
*/
async function readEntries (zipFilePath, onEntry) {
// Open a zip file and start reading entries
const zipfile = await openZip(zipFilePath, {lazyEntries: true});
const zipReadStreamPromise = new B((resolve, reject) => {
zipfile.once('end', resolve);
zipfile.once('error', reject);
// On each entry, call 'onEntry' and then read the next entry
zipfile.on('entry', async (entry) => {
const res = await onEntry({
entry,
extractEntryTo: async (destDir) => await _extractEntryTo(zipfile, entry, destDir)
});
if (res === false) {
return zipfile.emit('end');
}
zipfile.readEntry();
});
});
zipfile.readEntry();
// Wait for the entries to finish being iterated through
return await zipReadStreamPromise;
}
/**
* @typedef {Object} ZipOptions
* @property {boolean} encodeToBase64 [false] Whether to encode
* the resulting archive to a base64-encoded string
* @property {boolean} isMetered [true] Whether to log the actual
* archiver performance
* @property {number} maxSize [1073741824] The maximum size of
* the resulting archive in bytes. This is set to 1GB by default, because
* Appium limits the maximum HTTP body size to 1GB. Also, the NodeJS heap
* size must be enough to keep the resulting object (usually this size is
* limited to 1.4 GB)
* @property {number} level [9] The compression level. The maximum
* level is 9 (the best compression, worst performance). The minimum
* compression level is 0 (no compression).
*/
/**
* Converts contents of local directory to an in-memory .zip buffer
*
* @param {string} srcPath The full path to the folder or file being zipped
* @param {ZipOptions} opts Zipping options
* @returns {Buffer} Zipped (and encoded if `encodeToBase64` is truthy)
* content of the source path as memory buffer
* @throws {Error} if there was an error while reading the source
* or the source is too big
*/
async function toInMemoryZip (srcPath, opts = {}) {
if (!await fs.exists(srcPath)) {
throw new Error(`No such file or folder: ${srcPath}`);
}
const {
isMetered = true,
encodeToBase64 = false,
maxSize = 1 * GiB,
level = 9,
} = opts;
const resultBuffers = [];
let resultBuffersSize = 0;
// Create a writable stream that zip buffers will be streamed to
const resultWriteStream = new stream.Writable({
write: (buffer, encoding, next) => {
resultBuffers.push(buffer);
resultBuffersSize += buffer.length;
if (maxSize > 0 && resultBuffersSize > maxSize) {
resultWriteStream.emit('error', new Error(`The size of the resulting ` +
`archive must not be greater than ${toReadableSizeString(maxSize)}`));
}
next();
},
});
// Zip 'srcDir' and stream it to the above writable stream
const archive = archiver('zip', {
zlib: {level}
});
let srcSize = null;
const base64EncoderStream = encodeToBase64 ? new Base64Encode() : null;
const resultWriteStreamPromise = new B((resolve, reject) => {
resultWriteStream.once('error', (e) => {
if (base64EncoderStream) {
archive.unpipe(base64EncoderStream);
base64EncoderStream.unpipe(resultWriteStream);
} else {
archive.unpipe(resultWriteStream);
}
archive.abort();
archive.destroy();
reject(e);
});
resultWriteStream.once('finish', () => {
srcSize = archive.pointer();
resolve();
});
});
const archiveStreamPromise = new B((resolve, reject) => {
archive.once('finish', resolve);
archive.once('error', (e) => reject(
new Error(`Failed to archive '${srcPath}': ${e.message}`)));
});
const timer = isMetered ? new Timer().start() : null;
if ((await fs.stat(srcPath)).isDirectory()) {
archive.directory(srcPath, false);
} else {
archive.file(srcPath, {
name: path.basename(srcPath),
});
}
if (base64EncoderStream) {
archive.pipe(base64EncoderStream);
base64EncoderStream.pipe(resultWriteStream);
} else {
archive.pipe(resultWriteStream);
}
archive.finalize();
// Wait for the streams to finish
await B.all([archiveStreamPromise, resultWriteStreamPromise]);
if (timer) {
log.debug(`Zipped ${encodeToBase64 ? 'and base64-encoded ' : ''}` +
`'${path.basename(srcPath)}' ` +
(srcSize ? `(${toReadableSizeString(srcSize)}) ` : '') +
`in ${timer.getDuration().asSeconds.toFixed(3)}s ` +
`(compression level: ${level})`);
}
// Return the array of zip buffers concatenated into one buffer
return Buffer.concat(resultBuffers);
}
/**
* Verifies whether the given file is a valid ZIP archive
*
* @param {string} filePath - Full path to the file
* @throws {Error} If the file does not exist or is not a valid ZIP archive
*/
async function assertValidZip (filePath) {
if (!await fs.exists(filePath)) {
throw new Error(`The file at '${filePath}' does not exist`);
}
const {size} = await fs.stat(filePath);
if (size < 4) {
throw new Error(`The file at '${filePath}' is too small to be a ZIP archive`);
}
const fd = await fs.open(filePath, 'r');
try {
const buffer = Buffer.alloc(ZIP_MAGIC.length);
await fs.read(fd, buffer, 0, ZIP_MAGIC.length, 0);
const signature = buffer.toString('ascii');
if (signature !== ZIP_MAGIC) {
throw new Error(`The file signature '${signature}' of '${filePath}' ` +
`is not equal to the expected ZIP archive signature '${ZIP_MAGIC}'`);
}
return true;
} finally {
await fs.close(fd);
}
}
/**
* @typedef {Object} ZipCompressionOptions
* @property {number} level [9] - Compression level in range 0..9
* (greater numbers mean better compression, but longer processing time)
*/
/**
* @typedef {Object} ZipSourceOptions
* @property {!string} pattern ['**\/*'] - GLOB pattern for compression
* @property {!string} cwd - The source root folder (the parent folder of
* the destination file by default)
* @property {?Array<string>} ignore - The list of ignored patterns
*/
/**
* Creates an archive based on the given glob pattern
*
* @param {string} dstPath - The resulting archive path
* @param {ZipSourceOptions} src - Source options
* @param {ZipCompressionOptions} opts - Compression options
* @throws {Error} If there was an error while creating the archive
*/
async function toArchive (dstPath, src = {}, opts = {}) {
const {
level = 9,
} = opts;
const {
pattern = '**/*',
cwd = path.dirname(dstPath),
ignore = [],
} = src;
const archive = archiver('zip', { zlib: { level }});
const stream = fs.createWriteStream(dstPath);
return await new B((resolve, reject) => {
archive
.glob(pattern, {
cwd,
ignore,
})
.on('error', reject)
.pipe(stream);
stream
.on('error', (e) => {
archive.unpipe(stream);
archive.abort();
archive.destroy();
reject(e);
})
.on('finish', resolve);
archive.finalize();
});
}
export { extractAllTo, readEntries, toInMemoryZip, _extractEntryTo,
assertValidZip, toArchive };
export default { extractAllTo, readEntries, toInMemoryZip, assertValidZip, toArchive };
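A sketch of the main workflows offered by the module above; all paths are placeholders:
```js
import zip from './zip';

async function zipRoundTrip () {
  // pack everything under /tmp/src into /tmp/out.zip
  await zip.toArchive('/tmp/out.zip', {cwd: '/tmp/src'});
  // throws unless the file starts with the 'PK' magic bytes
  await zip.assertValidZip('/tmp/out.zip');
  // list entry names without extracting; returning false would stop the iteration
  const names = [];
  await zip.readEntries('/tmp/out.zip', ({entry}) => {
    names.push(entry.fileName);
  });
  // unpack into an absolute destination folder
  await zip.extractAllTo('/tmp/out.zip', '/tmp/extracted');
  return names;
}
```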
+72
View File
@@ -0,0 +1,72 @@
{
"name": "@appium/support",
"version": "2.53.0",
"description": "Support libs used across appium packages",
"keywords": [
"appium"
],
"bugs": {
"url": "https://github.com/appium/appium/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/appium/appium.git"
},
"license": "Apache-2.0",
"author": "appium",
"main": "./build/index.js",
"bin": {},
"directories": {
"lib": "lib"
},
"files": [
"index.js",
"lib",
"build/index.js",
"build/lib"
],
"pre-commit": [
"precommit-msg",
"precommit-test"
],
"dependencies": {
"@babel/runtime": "^7.0.0",
"archiver": "^5.0.0",
"axios": "^0.x",
"base64-stream": "^1.0.0",
"bluebird": "^3.5.1",
"bplist-creator": "^0",
"bplist-parser": "^0.x",
"form-data": "^4.0.0",
"get-stream": "^6.0.0",
"glob": "^7.1.2",
"jimp": "^0.x",
"jsftp": "^2.1.2",
"klaw": "^3.0.0",
"lockfile": "^1.0.4",
"lodash": "^4.2.1",
"mkdirp": "^1.0.0",
"moment": "^2.24.0",
"mv": "^2.1.1",
"ncp": "^2.0.0",
"npmlog": "^4.1.2",
"plist": "^3.0.1",
"pluralize": "^8.0.0",
"pngjs": "^6.0.0",
"rimraf": "^3.0.0",
"sanitize-filename": "^1.6.1",
"semver": "^7.0.0",
"shell-quote": "^1.7.2",
"source-map-support": "^0.5.5",
"teen_process": "^1.5.1",
"uuid": "^8.0.0",
"which": "^2.0.0",
"yauzl": "^2.7.0"
},
"devDependencies": {
"@appium/gulp-plugins": "^5.4.0"
},
"engines": {
"node": ">=12"
}
}
+5
View File
@@ -0,0 +1,5 @@
{
"rules": {
"func-names": 0
}
}
Binary file not shown.
@@ -0,0 +1,28 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.locationd.bundle-/System/Library/PrivateFrameworks/Parsec.framework</key>
<dict>
<key>Whitelisted</key>
<false/>
<key>Executable</key>
<string></string>
<key>BundlePath</key>
<string>/System/Library/PrivateFrameworks/Parsec.framework</string>
<key>Registered</key>
<string></string>
</dict>
<key>com.apple.locationd.bundle-/System/Library/PrivateFrameworks/WirelessDiagnostics.framework</key>
<dict>
<key>Whitelisted</key>
<false/>
<key>Executable</key>
<string></string>
<key>BundlePath</key>
<string>/System/Library/PrivateFrameworks/WirelessDiagnostics.framework</string>
<key>Registered</key>
<string></string>
</dict>
</dict>
</plist>
+201
View File
@@ -0,0 +1,201 @@
import { fs, tempDir } from '../index.js';
import chai from 'chai';
import path from 'path';
import { exec } from 'teen_process';
import B from 'bluebird';
import _ from 'lodash';
const should = chai.should();
const MOCHA_TIMEOUT = 20000;
describe('fs', function () {
this.timeout(MOCHA_TIMEOUT);
const existingPath = path.resolve(__dirname, 'fs-specs.js');
it('should exist', function () {
should.exist(fs);
});
it('should have expected methods', function () {
should.exist(fs.open);
should.exist(fs.close);
should.exist(fs.access);
should.exist(fs.mkdir);
should.exist(fs.readlink);
should.exist(fs.exists);
should.exist(fs.rimraf);
should.exist(fs.rimrafSync);
should.exist(fs.readFile);
should.exist(fs.writeFile);
should.exist(fs.lstat);
should.exist(fs.mv);
});
describe('mkdir', function () {
let dirName = path.resolve(__dirname, 'tmp');
it('should make a directory that does not exist', async function () {
await fs.rimraf(dirName);
await fs.mkdir(dirName);
let exists = await fs.hasAccess(dirName);
exists.should.be.true;
});
it('should not complain if the dir already exists', async function () {
let exists = await fs.hasAccess(dirName);
exists.should.be.true;
await fs.mkdir(dirName);
});
it('should still throw an error if something else goes wrong', async function () {
await fs.mkdir('/bin/foo').should.be.rejected;
});
});
it('hasAccess', async function () {
(await fs.exists(existingPath)).should.be.ok;
let nonExistingPath = path.resolve(__dirname, 'wrong-specs.js');
(await fs.hasAccess(nonExistingPath)).should.not.be.ok;
});
it('exists', async function () {
(await fs.exists(existingPath)).should.be.ok;
let nonExistingPath = path.resolve(__dirname, 'wrong-specs.js');
(await fs.exists(nonExistingPath)).should.not.be.ok;
});
it('readFile', async function () {
(await fs.readFile(existingPath, 'utf8')).should.contain('readFile');
});
describe('copyFile', function () {
it('should be able to copy a file', async function () {
let newPath = path.resolve(await tempDir.openDir(), 'fs-specs.js');
await fs.copyFile(existingPath, newPath);
(await fs.readFile(newPath, 'utf8')).should.contain('readFile');
});
it('should throw an error if the source does not exist', async function () {
await fs.copyFile('/sdfsdfsdfsdf', '/tmp/bla').should.eventually.be.rejected;
});
});
it('rimraf', async function () {
let newPath = path.resolve(await tempDir.openDir(), 'fs-specs.js');
await fs.copyFile(existingPath, newPath);
(await fs.exists(newPath)).should.be.true;
await fs.rimraf(newPath);
(await fs.exists(newPath)).should.be.false;
});
it('sanitizeName', function () {
fs.sanitizeName(':file?.txt', {
replacement: '-',
}).should.eql('-file-.txt');
});
it('rimrafSync', async function () {
let newPath = path.resolve(await tempDir.openDir(), 'fs-specs.js');
await fs.copyFile(existingPath, newPath);
(await fs.exists(newPath)).should.be.true;
fs.rimrafSync(newPath);
(await fs.exists(newPath)).should.be.false;
});
describe('md5', function () {
this.timeout(1200000);
let smallFilePath;
let bigFilePath;
before(async function () {
// get the path of a small file (this source file)
smallFilePath = existingPath;
// create a large file to test, about 163840000 bytes
bigFilePath = path.resolve(await tempDir.openDir(), 'enormous.txt');
let file = await fs.open(bigFilePath, 'w');
let fileData = '';
for (let i = 0; i < 4096; i++) {
fileData += '1';
}
for (let i = 0; i < 40000; i++) {
await fs.write(file, fileData);
}
await fs.close(file);
});
after(async function () {
await fs.unlink(bigFilePath);
});
it('should calculate hash of correct length', async function () {
(await fs.md5(smallFilePath)).should.have.length(32);
});
it('should be able to run on huge file', async function () {
(await fs.md5(bigFilePath)).should.have.length(32);
});
});
describe('hash', function () {
it('should calculate sha1 hash', async function () {
(await fs.hash(existingPath, 'sha1')).should.have.length(40);
});
it('should calculate md5 hash', async function () {
(await fs.hash(existingPath, 'md5')).should.have.length(32);
});
});
it('stat', async function () {
let stat = await fs.stat(existingPath);
stat.should.have.property('atime');
});
describe('which', function () {
it('should find correct executable', async function () {
let systemNpmPath = (await exec('which', ['npm'])).stdout.trim();
let npmPath = await fs.which('npm');
npmPath.should.equal(systemNpmPath);
});
it('should fail gracefully', async function () {
await fs.which('something_that_does_not_exist')
.should.eventually.be.rejected;
});
});
it('glob', async function () {
let glob = 'test/*-specs.js';
let tests = await fs.glob(glob);
tests.should.be.an('array');
tests.should.have.length.above(2);
});
describe('walkDir', function () {
it('walkDir recursive', async function () {
let inCallback = 0;
const filePath = await fs.walkDir(__dirname, true, async (item) => {
if (item.endsWith('logger/helpers.js')) {
++inCallback;
// This is to verify proper await functionality of the
// callback invocation inside the file system walker
await B.delay(500);
--inCallback;
return true;
}
});
inCallback.should.equal(0);
filePath.should.not.be.null;
});
it('should walk all elements recursive', async function () {
let inCallback = 0;
const filePath = await fs.walkDir(__dirname, true, async () => {
++inCallback;
await B.delay(500);
--inCallback;
});
inCallback.should.equal(0);
_.isNil(filePath).should.be.true;
});
it('should throw error through callback', async function () {
let processed = 0;
await chai.expect(fs.walkDir(__dirname, true,
() => {
++processed;
throw 'Callback error';
})).to.be.rejectedWith('Callback error');
processed.should.equal(1);
});
it('should traverse non-recursively', async function () {
const filePath = await fs.walkDir(__dirname, false, (item) => item.endsWith('logger/helpers.js'));
_.isNil(filePath).should.be.true;
});
});
});
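A compact sketch of the promise-based `fs` helpers the specs above exercise; the directory and file names are illustrative, and the `../index.js` entry point is the same one the specs import.
```js
// Sketch only: the fs helpers exercised by the specs above.
import path from 'path';
import { fs } from '../index.js';

async function demo (root) {
  const dir = path.resolve(root, 'scratch');
  await fs.mkdir(dir);                            // does not complain if it already exists
  if (await fs.hasAccess(dir)) {
    await fs.writeFile(path.resolve(dir, 'a.txt'), 'data', 'utf8');
  }
  // walkDir(dir, recursive, callback) resolves with the first item for which the
  // (possibly async) callback returns a truthy value, or a nil value otherwise
  const hit = await fs.walkDir(dir, true, (item) => item.endsWith('.txt'));
  await fs.rimraf(dir);
  return hit;
}
```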
+22
View File
@@ -0,0 +1,22 @@
import { EventEmitter } from 'events';
class MockReadWriteStream extends EventEmitter {
resume () {}
pause () {}
setEncoding () {}
flush () {}
write (msg) {
this.emit('data', msg);
}
end () {
this.emit('end');
this.emit('finish');
}
}
export { MockReadWriteStream };
@@ -0,0 +1,202 @@
import {
base64ToImage, imageToBase64, cropImage,
getImagesMatches, getImagesSimilarity, getImageOccurrence,
getJimpImage, MIME_PNG,
} from '../lib/image-util';
import path from 'path';
import _ from 'lodash';
import chai from 'chai';
import { fs } from '..';
import chaiAsPromised from 'chai-as-promised';
chai.use(chaiAsPromised);
chai.should();
const FIXTURES_ROOT = path.resolve(__dirname, '..', '..', 'test', 'images');
async function getImage (name) {
const imagePath = path.resolve(FIXTURES_ROOT, name);
return await fs.readFile(imagePath, 'utf8');
}
describe('image-util', function () {
before(function () {
// TODO: remove when opencv4nodejs is fixed
return this.skip();
});
describe('cropBase64Image', function () {
let originalImage = null;
before(async function () {
const originalImage64 = await getImage('full-image.b64');
originalImage = await base64ToImage(originalImage64);
// verify original image size, to be sure that original image is correct
originalImage.width.should.be.equal(640, 'unexpected width');
originalImage.height.should.be.equal(1136, 'unexpected height');
});
it('should verify that an image is cropped correctly', async function () {
const croppedImage = await cropImage(originalImage, {left: 35, top: 107, width: 323, height: 485});
// verify cropped image size, it should be less than original image according to crop region
croppedImage.width.should.be.equal(323, 'unexpected width');
croppedImage.height.should.be.equal(485, 'unexpected height');
// verify that image cropped, compare base64 representation
const croppedImageShouldBe = await getImage('cropped-image.b64');
const croppedImage64 = await imageToBase64(croppedImage);
croppedImage64.should.be.equal(croppedImageShouldBe);
});
});
describe('OpenCV helpers', function () {
// OpenCV needs several seconds for initialization
this.timeout(120000);
let imgFixture = null;
let fullImage = null;
let partialImage = null;
let originalImage = null;
let changedImage = null;
let rotatedImage = null;
let numberImage = null;
before(async function () {
const imagePath = path.resolve(FIXTURES_ROOT, 'full-image.b64');
imgFixture = Buffer.from(await fs.readFile(imagePath, 'binary'), 'base64');
fullImage = await fs.readFile(path.resolve(FIXTURES_ROOT, 'findwaldo.jpg'));
partialImage = await fs.readFile(path.resolve(FIXTURES_ROOT, 'waldo.jpg'));
originalImage = await fs.readFile(path.resolve(FIXTURES_ROOT, 'cc1.png'));
changedImage = await fs.readFile(path.resolve(FIXTURES_ROOT, 'cc2.png'));
numberImage = await fs.readFile(path.resolve(FIXTURES_ROOT, 'number5.png'));
rotatedImage = await fs.readFile(path.resolve(FIXTURES_ROOT, 'cc_rotated.png'));
});
describe('getImagesMatches', function () {
it('should calculate the number of matches between two images', async function () {
for (const detectorName of ['AKAZE', 'ORB']) {
const {count, totalCount} = await getImagesMatches(fullImage, fullImage, {detectorName});
count.should.be.above(0);
totalCount.should.eql(count);
}
});
it('should visualize matches between two images', async function () {
const {visualization} = await getImagesMatches(fullImage, fullImage, {visualize: true});
visualization.should.not.be.empty;
});
it('should visualize matches between two images and apply goodMatchesFactor', async function () {
const {visualization, points1, rect1, points2, rect2} = await getImagesMatches(rotatedImage, originalImage, {
visualize: true,
matchFunc: 'BruteForceHamming',
goodMatchesFactor: 40
});
visualization.should.not.be.empty;
points1.length.should.be.above(4);
rect1.x.should.be.above(0);
rect1.y.should.be.above(0);
rect1.width.should.be.above(0);
rect1.height.should.be.above(0);
points2.length.should.be.above(4);
rect2.x.should.be.above(0);
rect2.y.should.be.above(0);
rect2.width.should.be.above(0);
rect2.height.should.be.above(0);
});
});
describe('getImagesSimilarity', function () {
it('should calculate the similarity score between two images', async function () {
const {score} = await getImagesSimilarity(imgFixture, imgFixture);
score.should.be.above(0);
});
it('should visualize the similarity between two images', async function () {
const {visualization} = await getImagesSimilarity(originalImage, changedImage, {visualize: true});
visualization.should.not.be.empty;
});
});
describe('getImageOccurrence', function () {
it('should calculate the partial image position in the full image', async function () {
const {rect, score} = await getImageOccurrence(fullImage, partialImage);
rect.x.should.be.above(0);
rect.y.should.be.above(0);
rect.width.should.be.above(0);
rect.height.should.be.above(0);
score.should.be.above(0);
});
it('should reject matches that fall below a threshold', async function () {
await getImageOccurrence(fullImage, partialImage, {threshold: 1.0})
.should.eventually.be.rejectedWith(/threshold/);
});
it('should visualize the partial image position in the full image', async function () {
const {visualization} = await getImageOccurrence(fullImage, partialImage, {visualize: true});
visualization.should.not.be.empty;
});
describe('multiple', function () {
it('should return matches in the full image', async function () {
const { multiple } = await getImageOccurrence(originalImage, numberImage, {threshold: 0.8, multiple: true});
multiple.length.should.be.eq(3);
for (const result of multiple) {
result.rect.x.should.be.above(0);
result.rect.y.should.be.above(0);
result.rect.width.should.be.above(0);
result.rect.height.should.be.above(0);
result.score.should.be.above(0);
}
});
it('should reject matches that fall below a threshold', async function () {
await getImageOccurrence(originalImage, numberImage, {threshold: 1.0, multiple: true})
.should.eventually.be.rejectedWith(/threshold/);
});
it('should visualize the partial image position in the full image', async function () {
const { multiple } = await getImageOccurrence(originalImage, numberImage, {visualize: true, multiple: true});
for (const result of multiple) {
result.visualization.should.not.be.empty;
}
});
});
});
});
describe('Jimp helpers', function () {
it('should get a jimp object using image buffer', async function () {
const base64Image = await getImage('cropped-image.b64');
const imageBuffer = Buffer.from(base64Image, 'base64');
const jimpImg = await getJimpImage(imageBuffer);
jimpImg.hash().should.eql('80000000000');
jimpImg.bitmap.height.should.eql(485);
jimpImg.bitmap.width.should.eql(323);
});
it('should get a jimp object using b64 string', async function () {
const base64Image = await getImage('cropped-image.b64');
const jimpImg = await getJimpImage(base64Image);
jimpImg.hash().should.eql('80000000000');
jimpImg.bitmap.height.should.eql(485);
jimpImg.bitmap.width.should.eql(323);
});
it('should error with incorrect data type', async function () {
await getJimpImage(1234).should.eventually.be.rejectedWith(/string or buffer/);
});
it('should error with incorrect image data', async function () {
await getJimpImage('foo').should.eventually.be.rejectedWith(/Could not find MIME for Buffer/);
});
it('should get an image buffer via the overridden getBuffer method', async function () {
const base64Image = await getImage('cropped-image.b64');
const jimpImg = await getJimpImage(base64Image);
const buf = await jimpImg.getBuffer(MIME_PNG);
_.isBuffer(buf).should.be.true;
});
});
});
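A short sketch of how the cropping and Jimp helpers exercised above might be combined; the paths, function names and crop dimensions are illustrative, and like the specs it assumes the optional image dependencies are installed.
```js
// Sketch only: cropping an image with the helpers tested above.
import {
  base64ToImage, imageToBase64, cropImage, getJimpImage, MIME_PNG,
} from '../lib/image-util';
import { fs } from '..';

async function cropTopLeftQuarter (imagePath) {
  const b64 = (await fs.readFile(imagePath)).toString('base64');
  const image = await base64ToImage(b64);
  const cropped = await cropImage(image, {
    left: 0,
    top: 0,
    width: Math.floor(image.width / 2),
    height: Math.floor(image.height / 2),
  });
  return await imageToBase64(cropped);             // base64-encoded crop
}

async function pngBufferFor (base64Image) {
  const jimpImg = await getJimpImage(base64Image); // accepts a buffer or a b64 string
  return await jimpImg.getBuffer(MIME_PNG);        // overridden, promise-based getBuffer
}
```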
Binary file not shown. (After: 29 KiB)
Binary file not shown. (After: 154 KiB)
Binary file not shown. (After: 154 KiB)
Binary file not shown. (After: 158 KiB)
File diff suppressed because one or more lines are too long
Binary file not shown. (After: 557 KiB)
File diff suppressed because one or more lines are too long
Binary file not shown. (After: 2.3 KiB)
Binary file not shown. (After: 1.3 KiB)
+39
View File
@@ -0,0 +1,39 @@
import AppiumSupport from '../index.js';
import chai from 'chai';
chai.should();
let { system, tempDir, util } = AppiumSupport;
describe('index', function () {
describe('default', function () {
it('should expose an object', function () {
AppiumSupport.should.exist;
AppiumSupport.should.be.an.instanceof(Object);
});
it('should expose system object', function () {
AppiumSupport.system.should.exist;
AppiumSupport.system.should.be.an.instanceof(Object);
});
it('should expose tempDir object', function () {
AppiumSupport.tempDir.should.exist;
AppiumSupport.tempDir.should.be.an.instanceof(Object);
});
it('should expose util object', function () {
AppiumSupport.util.should.exist;
AppiumSupport.util.should.be.an.instanceof(Object);
});
});
it('should expose an object as "system" ', function () {
system.should.be.an.instanceof(Object);
});
it('should expose an object as "tempDir" ', function () {
tempDir.should.be.an.instanceof(Object);
});
it('should expose an object as "util" ', function () {
util.should.be.an.instanceof(Object);
});
});
@@ -0,0 +1,96 @@
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
import { fs } from '../index';
import os from 'os';
import path from 'path';
import { SecureValuesPreprocessor } from '../lib/log-internal';
const CONFIG_PATH = path.resolve(os.tmpdir(), 'rules.json');
chai.use(chaiAsPromised);
chai.should();
describe('Log Internals', function () {
let preprocessor;
beforeEach(function () {
preprocessor = new SecureValuesPreprocessor();
});
it('should preprocess a string and make replacements', async function () {
const issues = await preprocessor.loadRules([
'yolo',
]);
issues.length.should.eql(0);
preprocessor.rules.length.should.eql(1);
const replacer = preprocessor.rules[0].replacer;
preprocessor.preprocess(':yolo" yo Yolo yyolo').should.eql(`:${replacer}" yo Yolo yyolo`);
});
it('should preprocess a string and make replacements with multiple simple rules', async function () {
const issues = await preprocessor.loadRules([
'yolo',
'yo',
]);
issues.length.should.eql(0);
preprocessor.rules.length.should.eql(2);
const replacer = preprocessor.rules[0].replacer;
preprocessor.preprocess(':yolo" yo Yolo yyolo').should.eql(`:${replacer}" ${replacer} Yolo yyolo`);
});
it('should preprocess a string and make replacements with multiple complex rules', async function () {
const replacer2 = '***';
const issues = await preprocessor.loadRules([
{ text: 'yolo', flags: 'i' },
{ pattern: '^:', replacer: replacer2 },
]);
issues.length.should.eql(0);
preprocessor.rules.length.should.eql(2);
const replacer = preprocessor.rules[0].replacer;
preprocessor.preprocess(':yolo" yo Yolo yyolo').should.eql(`${replacer2}${replacer}" yo ${replacer} yyolo`);
});
it(`should preprocess a string and apply a rule where 'pattern' has priority over 'text'`, async function () {
const replacer = '***';
const issues = await preprocessor.loadRules([
{ pattern: '^:', text: 'yo', replacer },
]);
issues.length.should.eql(0);
preprocessor.rules.length.should.eql(1);
preprocessor.preprocess(':yolo" yo Yolo yyolo').should.eql(`${replacer}yolo" yo Yolo yyolo`);
});
it('should preprocess a string and make replacements with multiple complex rules and issues', async function () {
const replacer2 = '***';
const issues = await preprocessor.loadRules([
{ text: 'yolo', flags: 'i' },
{ pattern: '^:(', replacer: replacer2 },
]);
issues.length.should.eql(1);
preprocessor.rules.length.should.eql(1);
const replacer = preprocessor.rules[0].replacer;
preprocessor.preprocess(':yolo" yo Yolo yyolo').should.eql(`:${replacer}" yo ${replacer} yyolo`);
});
it('should leave the string unchanged if all rules have issues', async function () {
const replacer2 = '***';
const issues = await preprocessor.loadRules([
null,
{ flags: 'i' },
{ pattern: '^:(', replacer: replacer2 },
]);
issues.length.should.eql(3);
preprocessor.rules.length.should.eql(0);
preprocessor.preprocess(':yolo" yo Yolo yyolo').should.eql(':yolo" yo Yolo yyolo');
});
it('should fail if rules cannot be accessed', async function () {
await preprocessor.loadRules('bla').should.eventually.be.rejected;
});
it('should fail if rules JSON cannot be parsed', async function () {
await fs.writeFile(CONFIG_PATH, 'blabla', 'utf8');
await preprocessor.loadRules(CONFIG_PATH).should.eventually.be.rejected;
});
});
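A brief sketch of driving `SecureValuesPreprocessor` directly, mirroring the rule shapes used in the specs above; the rule values and the `redact` helper name are illustrative.
```js
// Sketch only: redacting sensitive values before a line is logged.
import { SecureValuesPreprocessor } from '../lib/log-internal';

async function redact (line) {
  const preprocessor = new SecureValuesPreprocessor();
  // rules may be plain strings or {text|pattern, flags, replacer} objects;
  // rules that cannot be compiled are reported back as issues
  const issues = await preprocessor.loadRules([
    'secret-token',
    { pattern: '\\d{4}-\\d{4}', replacer: '****' },
  ]);
  if (issues.length) {
    throw new Error(`Could not load ${issues.length} rule(s)`);
  }
  return preprocessor.preprocess(line);
}
```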
+56
View File
@@ -0,0 +1,56 @@
import chai from 'chai';
import sinon from 'sinon';
import _ from 'lodash';
import { logger } from '../..';
chai.should();
function setupWriters () {
return {'stdout': sinon.spy(process.stdout, 'write'),
'stderr': sinon.spy(process.stderr, 'write')};
}
function getDynamicLogger (testingMode, forceLogs, prefix = null) {
process.env._TESTING = testingMode ? '1' : '0';
process.env._FORCE_LOGS = forceLogs ? '1' : '0';
return logger.getLogger(prefix);
}
function restoreWriters (writers) {
for (let w of _.values(writers)) {
w.restore();
}
}
function someoneHadOutput (writers, output) {
let hadOutput = false;
let matchOutput = sinon.match(function (value) {
return value && value.indexOf(output) >= 0;
}, 'matchOutput');
for (let writer of _.values(writers)) {
if (writer.calledWith) {
hadOutput = writer.calledWithMatch(matchOutput);
if (hadOutput) break; // eslint-disable-line curly
}
}
return hadOutput;
}
function assertOutputContains (writers, output) {
if (!someoneHadOutput(writers, output)) {
throw new Error(`Expected something to have been called with: '${output}'`);
}
}
function assertOutputDoesntContain (writers, output) {
if (someoneHadOutput(writers, output)) {
throw new Error(`Expected nothing to have been called with: '${output}'`);
}
}
export {
setupWriters, restoreWriters, assertOutputContains, assertOutputDoesntContain,
getDynamicLogger,
};
@@ -0,0 +1,37 @@
// transpile:mocha
import { getDynamicLogger, restoreWriters, setupWriters,
assertOutputContains } from './helpers';
describe('logger with force log', function () {
let writers, log;
before(function () {
writers = setupWriters();
log = getDynamicLogger(true, true);
log.level = 'silly';
});
after(function () {
restoreWriters(writers);
});
it('should not rewrite log levels even during testing', function () {
log.silly('silly');
assertOutputContains(writers, 'silly');
log.verbose('verbose');
assertOutputContains(writers, 'verbose');
log.debug('debug');
assertOutputContains(writers, 'debug');
log.info('info');
assertOutputContains(writers, 'info');
log.http('http');
assertOutputContains(writers, 'http');
log.warn('warn');
assertOutputContains(writers, 'warn');
log.error('error');
assertOutputContains(writers, 'error');
(() => { log.errorAndThrow('msg'); }).should.throw('msg');
assertOutputContains(writers, 'error');
assertOutputContains(writers, 'msg');
});
});
@@ -0,0 +1,112 @@
// transpile:mocha
import { getDynamicLogger, restoreWriters, setupWriters,
assertOutputContains, assertOutputDoesntContain } from './helpers';
const LOG_LEVELS = ['silly', 'verbose', 'info', 'http', 'warn', 'error'];
describe('normal logger', function () {
let writers, log;
beforeEach(function () {
writers = setupWriters();
log = getDynamicLogger(false, false);
log.level = 'silly';
});
afterEach(function () {
restoreWriters(writers);
});
it('should not rewrite log levels outside of testing', function () {
for (const levelName of LOG_LEVELS) {
log[levelName](levelName);
assertOutputContains(writers, levelName);
}
});
it('throw should not rewrite log levels outside of testing and throw error', function () {
(() => { log.errorAndThrow('msg1'); }).should.throw('msg1');
(() => { log.errorAndThrow(new Error('msg2')); }).should.throw('msg2');
assertOutputContains(writers, 'msg1');
assertOutputContains(writers, 'msg2');
});
it('should get and set log levels', function () {
log.level = 'warn';
log.level.should.equal('warn');
log.info('information');
log.warn('warning');
assertOutputDoesntContain(writers, 'information');
assertOutputContains(writers, 'warning');
});
it('should split lines of multi-line logs', function () {
log.level = 'warn';
log.warn('this is one line\nand this is another');
assertOutputDoesntContain(writers, 'this is one line\nand this is another');
assertOutputContains(writers, 'this is one line');
assertOutputContains(writers, 'and this is another');
});
it('should split stack trace of Error', function () {
log.level = 'warn';
let error = new Error('this is an error');
error.stack = 'stack line 1\nstack line 2';
log.warn(error);
assertOutputDoesntContain(writers, 'stack line 1\nstack line 2');
assertOutputContains(writers, 'stack line 1');
assertOutputContains(writers, 'stack line 2');
});
});
describe('normal logger with static prefix', function () {
let writers, log;
const PREFIX = 'my_static_prefix';
before(function () {
writers = setupWriters();
log = getDynamicLogger(false, false, PREFIX);
log.level = 'silly';
});
after(function () {
restoreWriters(writers);
});
it('should not rewrite log levels outside of testing', function () {
for (const levelName of LOG_LEVELS) {
log[levelName](levelName);
assertOutputContains(writers, levelName);
assertOutputContains(writers, PREFIX);
}
});
it('throw should not rewrite log levels outside of testing and throw error', function () {
(() => { log.errorAndThrow('msg'); }).should.throw('msg');
assertOutputContains(writers, 'error');
assertOutputContains(writers, PREFIX);
});
});
describe('normal logger with dynamic prefix', function () {
let writers, log;
const PREFIX = 'my_dynamic_prefix';
before(function () {
writers = setupWriters();
log = getDynamicLogger(false, false, () => PREFIX);
log.level = 'silly';
});
after(function () {
restoreWriters(writers);
});
it('should not rewrite log levels outside of testing', function () {
for (const levelName of LOG_LEVELS) {
log[levelName](levelName);
assertOutputContains(writers, levelName);
assertOutputContains(writers, PREFIX);
}
});
it('throw should not rewrite log levels outside of testing and throw error', function () {
(() => { log.errorAndThrow('msg'); }).should.throw('msg');
assertOutputContains(writers, 'error');
assertOutputContains(writers, PREFIX);
});
});
@@ -0,0 +1,38 @@
// transpile:mocha
import { getDynamicLogger, restoreWriters, setupWriters,
assertOutputDoesntContain } from './helpers';
describe('test logger', function () {
let writers, log;
before(function () {
writers = setupWriters();
log = getDynamicLogger(true);
});
after(function () {
restoreWriters(writers);
});
it('should contain levels', function () {
log.levels.should.have.length.above(3);
log.levels[2].should.equal('debug');
});
it('should unwrap', function () {
log.unwrap.should.exist;
log.unwrap().should.exist;
});
it('should rewrite npmlog levels during testing', function () {
const text = 'hi';
log.silly(text);
log.verbose(text);
log.info(text);
log.http(text);
log.warn(text);
log.error(text);
(() => { log.errorAndThrow(text); }).should.throw(text);
assertOutputDoesntContain(writers, text);
});
});
+108
View File
@@ -0,0 +1,108 @@
import _ from 'lodash';
import { mjpeg } from '..';
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
import B from 'bluebird';
import http from 'http';
import mJpegServer from 'mjpeg-server';
const {MJpegStream} = mjpeg;
const TEST_IMG_JPG = '/9j/4QAYRXhpZgAASUkqAAgAAAAAAAAAAAAAAP/sABFEdWNreQABAAQAAAAeAAD/4QOBaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wLwA8P3hwYWNrZXQgYmVnaW49Iu+7vyIgaWQ9Ilc1TTBNcENlaGlIenJlU3pOVGN6a2M5ZCI/PiA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJBZG9iZSBYTVAgQ29yZSA1LjYtYzE0MCA3OS4xNjA0NTEsIDIwMTcvMDUvMDYtMDE6MDg6MjEgICAgICAgICI+IDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+IDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiIHhtbG5zOnhtcE1NPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vIiB4bWxuczpzdFJlZj0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL3NUeXBlL1Jlc291cmNlUmVmIyIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bXBNTTpPcmlnaW5hbERvY3VtZW50SUQ9InhtcC5kaWQ6NGY5ODc1OTctZGE2My00Y2M0LTkzNDMtNGYyNjgzMGUwNjk3IiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOjlDMzI3QkY0N0Q3NTExRThCMTlDOTVDMDc2RDE5MDY5IiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOjlDMzI3QkYzN0Q3NTExRThCMTlDOTVDMDc2RDE5MDY5IiB4bXA6Q3JlYXRvclRvb2w9IkFkb2JlIFBob3Rvc2hvcCBDQyAyMDE4IChNYWNpbnRvc2gpIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6NGY5ODc1OTctZGE2My00Y2M0LTkzNDMtNGYyNjgzMGUwNjk3IiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjRmOTg3NTk3LWRhNjMtNGNjNC05MzQzLTRmMjY4MzBlMDY5NyIvPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRGPiA8L3g6eG1wbWV0YT4gPD94cGFja2V0IGVuZD0iciI/Pv/uAA5BZG9iZQBkwAAAAAH/2wCEABALCwsMCxAMDBAXDw0PFxsUEBAUGx8XFxcXFx8eFxoaGhoXHh4jJSclIx4vLzMzLy9AQEBAQEBAQEBAQEBAQEABEQ8PERMRFRISFRQRFBEUGhQWFhQaJhoaHBoaJjAjHh4eHiMwKy4nJycuKzU1MDA1NUBAP0BAQEBAQEBAQEBAQP/AABEIACAAIAMBIgACEQEDEQH/xABgAAEAAwEAAAAAAAAAAAAAAAAABAUHCAEBAAAAAAAAAAAAAAAAAAAAABAAAQMCAgsAAAAAAAAAAAAAAAECBBEDEgYhMRODo7PTVAUWNhEBAAAAAAAAAAAAAAAAAAAAAP/aAAwDAQACEQMRAD8Az8AAdAAAAAAI8+fE8dEuTZtzZR7VMb6OdTE5GJoYirrUp/e8qd9wb3TGe/lJ2551sx8D/9k=';
const should = chai.should();
chai.use(chaiAsPromised);
const MJPEG_SERVER_PORT = 8589;
const MJPEG_SERVER_URL = `http://localhost:${MJPEG_SERVER_PORT}`;
/**
* Start an mjpeg server for the purpose of testing, which just sends the same
* image over and over. Caller is responsible for closing the server.
* @param {int} port - port the server should listen on
* @param {int} [intMs] - how often the server should push an image
* @param {int} [times] - how many times the server should push an image before
* it closes the connection
* @returns {http.Server}
*/
function initMJpegServer (port, intMs = 300, times = 20) {
const server = http.createServer(async function (req, res) {
const mJpegReqHandler = mJpegServer.createReqHandler(req, res);
const jpg = Buffer.from(TEST_IMG_JPG, 'base64');
// just send the same jpeg over and over
for (let i = 0; i < times; i++) {
await B.delay(intMs);
mJpegReqHandler._write(jpg, null, _.noop);
}
mJpegReqHandler.close();
}).listen(port);
return server;
}
describe('MJpeg Stream (e2e)', function () {
let mJpegServer, stream;
before(async function () {
// TODO: remove when buffertools can handle v12
if (process.version.startsWith('v12')) {
return this.skip();
}
mJpegServer = await initMJpegServer(MJPEG_SERVER_PORT);
});
after(function () {
if (mJpegServer) {
mJpegServer.close();
}
if (stream) {
stream.stop(); // ensure streams are always stopped
}
});
it('should update mjpeg stream based on new data from mjpeg server', async function () {
stream = new MJpegStream(MJPEG_SERVER_URL, _.noop);
should.not.exist(stream.lastChunk);
await stream.start();
should.exist(stream.lastChunk);
stream.updateCount.should.eql(1);
await B.delay(1000); // let the stream update a bit
stream.updateCount.should.be.above(1);
// verify jpeg type and byte length of fixture image
const startBytes = Buffer.from([0xff, 0xd8]);
const endBytes = Buffer.from([0xff, 0xd9]);
const startPos = stream.lastChunk.indexOf(startBytes);
const endPos = stream.lastChunk.indexOf(endBytes);
startPos.should.eql(0); // proves we have a jpeg
endPos.should.eql(1278); // proves we have a jpeg of the right size
// verify we can get the base64 version too
const b64 = stream.lastChunkBase64;
b64.should.eql(TEST_IMG_JPG);
// verify we can get the PNG version too
const png = await stream.lastChunkPNGBase64();
png.should.be.a('string');
png.indexOf('iVBOR').should.eql(0);
png.length.should.be.above(400);
// now stop the stream and wait some more then assert no new data has come in
stream.stop();
await B.delay(1000);
should.not.exist(stream.lastChunk);
stream.updateCount.should.eql(0);
});
it('should error out if the server does not send any images before a timeout', async function () {
stream = new MJpegStream(MJPEG_SERVER_URL, _.noop);
await stream.start(0).should.eventually.be.rejectedWith(/never sent/);
});
});
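Outside the test harness, the stream might be driven roughly as follows; the URL and the `grabOneFrame` helper are illustrative, and the second constructor argument mirrors the `_.noop` callback used in the spec above.
```js
// Sketch only: grabbing a single frame from an MJPEG source.
import _ from 'lodash';
import { mjpeg } from '..';

const { MJpegStream } = mjpeg;

async function grabOneFrame (url = 'http://localhost:8589') {
  const stream = new MJpegStream(url, _.noop);
  try {
    await stream.start();                          // resolves once a frame has arrived
    return {
      jpegBase64: stream.lastChunkBase64,
      pngBase64: await stream.lastChunkPNGBase64(),
    };
  } finally {
    stream.stop();                                 // clears lastChunk and updateCount
  }
}
```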
+29
View File
@@ -0,0 +1,29 @@
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
import path from 'path';
import { downloadFile } from '../lib/net';
import { tempDir, fs } from '../index';
chai.use(chaiAsPromised);
describe('#net', function () {
let tmpRoot;
beforeEach(async function () {
tmpRoot = await tempDir.openDir();
});
afterEach(async function () {
await fs.rimraf(tmpRoot);
});
describe('downloadFile()', function () {
it('should download file into the target folder', async function () {
const dstPath = path.join(tmpRoot, 'download.tmp');
await downloadFile('https://appium.io/ico/apple-touch-icon-114x114-precomposed.png',
dstPath);
await fs.exists(dstPath).should.eventually.be.true;
});
});
});
+21
View File
@@ -0,0 +1,21 @@
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
import { node } from '..';
chai.should();
chai.use(chaiAsPromised);
describe('node utilities', function () {
describe('requirePackage', function () {
it('should be able to require a local package', async function () {
await node.requirePackage('chai').should.not.be.rejected;
});
it('should be able to require a global package', async function () {
await node.requirePackage('npm').should.not.be.rejected;
});
it('should fail to find uninstalled package', async function () {
await node.requirePackage('appium-foo-driver').should.eventually.be.rejectedWith(/Unable to load package/);
});
});
});
+51
View File
@@ -0,0 +1,51 @@
import chai from 'chai';
import path from 'path';
import { plist, tempDir, fs } from '../index.js';
chai.should();
const binaryPlistPath = path.resolve('test', 'assets', 'sample_binary.plist');
const textPlistPath = path.resolve('test', 'assets', 'sample_text.plist');
describe('plist', function () {
it('should parse plist file as binary', async function () {
let content = await plist.parsePlistFile(binaryPlistPath);
content.should.have.property('com.apple.locationd.bundle-/System/Library/PrivateFrameworks/Parsec.framework');
});
it(`should return an empty object if file doesn't exist and mustExist is set to false`, async function () {
let mustExist = false;
let content = await plist.parsePlistFile('doesntExist.plist', mustExist);
content.should.be.an('object');
content.should.be.empty;
});
it('should write plist file as binary', async function () {
// create a temporary file, to which we will write
let plistFile = path.resolve(await tempDir.openDir(), 'sample.plist');
await fs.copyFile(binaryPlistPath, plistFile);
// write some data
let updatedFields = {
'io.appium.test': true
};
await plist.updatePlistFile(plistFile, updatedFields, true);
// make sure the data is there
let content = await plist.parsePlistFile(plistFile);
content.should.have.property('io.appium.test');
});
it('should read binary plist', async function () {
const content = await fs.readFile(binaryPlistPath);
const object = plist.parsePlist(content);
object.should.have.property('com.apple.locationd.bundle-/System/Library/PrivateFrameworks/Parsec.framework');
});
it('should read text plist', async function () {
const content = await fs.readFile(textPlistPath);
const object = plist.parsePlist(content);
object.should.have.property('com.apple.locationd.bundle-/System/Library/PrivateFrameworks/Parsec.framework');
});
});
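A small sketch of the plist helpers exercised above; the file path and the `markAsTested` helper name are illustrative.
```js
// Sketch only: updating a plist and reading it back.
import { plist } from '../index.js';

async function markAsTested (plistFile) {
  // the third argument (`true`) writes the result back in binary format
  await plist.updatePlistFile(plistFile, {'io.appium.test': true}, true);
  // parsePlistFile handles both binary and text plists
  return await plist.parsePlistFile(plistFile);
}
```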
+93
View File
@@ -0,0 +1,93 @@
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
import * as teenProcess from 'teen_process';
import sinon from 'sinon';
import { process } from '../index.js';
import { retryInterval } from 'asyncbox';
chai.should();
chai.use(chaiAsPromised);
const SubProcess = teenProcess.SubProcess;
describe('process', function () {
describe('getProcessIds', function () {
let proc;
before(async function () {
proc = new SubProcess('tail', ['-f', __filename]);
await proc.start();
});
after(async function () {
await proc.stop();
});
it('should return an array for an existing process', async function () {
let pids = await process.getProcessIds('tail');
pids.should.be.an.instanceof(Array);
});
it('should get process identifiers for existing process', async function () {
let pids = await process.getProcessIds('tail');
pids.should.have.length.at.least(1);
});
it('should get an empty array when the process does not exist', async function () {
let pids = await process.getProcessIds('sadfgasdfasdf');
pids.should.have.length(0);
});
it('should throw an error if pgrep fails', async function () {
let tpMock = sinon.mock(teenProcess);
tpMock.expects('exec').throws({message: 'Oops', code: 2});
await process.getProcessIds('tail').should.eventually.be.rejectedWith(/Oops/);
tpMock.restore();
});
});
describe('killProcess', function () {
let proc;
beforeEach(async function () {
proc = new SubProcess('tail', ['-f', __filename]);
await proc.start();
});
afterEach(async function () {
if (proc.isRunning) {
await proc.stop();
}
});
it('should kill process that is running', async function () {
proc.isRunning.should.be.true;
await process.killProcess('tail');
// it may take a moment to actually be registered as killed
await retryInterval(10, 100, async () => { // eslint-disable-line require-await
proc.isRunning.should.be.false;
});
});
it('should do nothing if the process does not exist', async function () {
proc.isRunning.should.be.true;
await process.killProcess('asdfasdfasdf');
await retryInterval(10, 100, async () => { // eslint-disable-line require-await
proc.isRunning.should.be.false;
}).should.eventually.be.rejected;
});
it('should throw an error if pgrep fails', async function () {
let tpMock = sinon.mock(teenProcess);
tpMock.expects('exec').throws({message: 'Oops', code: 2});
await process.killProcess('tail').should.eventually.be.rejectedWith(/Oops/);
tpMock.restore();
});
it('should throw an error if pkill fails', async function () {
let tpMock = sinon.mock(teenProcess);
tpMock.expects('exec').twice()
.onFirstCall().returns({stdout: '42\n'})
.onSecondCall().throws({message: 'Oops', code: 2});
await process.killProcess('tail').should.eventually.be.rejectedWith(/Oops/);
tpMock.restore();
});
});
});
+120
View File
@@ -0,0 +1,120 @@
import { system } from '../index.js';
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
import os from 'os';
import sinon from 'sinon';
import * as teen_process from 'teen_process';
import _ from 'lodash';
chai.use(chaiAsPromised);
chai.should();
let sandbox, tpMock, osMock = null;
let SANDBOX = Symbol();
let mocks = {};
let libs = {teen_process, os, system};
describe('system', function () {
describe('isX functions', function () {
beforeEach(function () {
osMock = sinon.mock(os);
});
afterEach(function () {
osMock.verify();
});
it('should correctly return Windows System if it is a Windows', function () {
osMock.expects('type').returns('Windows_NT');
system.isWindows().should.be.true;
});
it('should correctly return Mac if it is a Mac', function () {
osMock.expects('type').returns('Darwin');
system.isMac().should.be.true;
});
it('should correctly return Linux if it is a Linux', function () {
osMock.expects('type').twice().returns('Linux');
system.isLinux().should.be.true;
});
});
describe('mac OSX version', function () {
beforeEach(function () {
tpMock = sinon.mock(teen_process);
});
afterEach(function () {
tpMock.verify();
});
it('should return correct version for 10.10.5', async function () {
tpMock.expects('exec').once().withExactArgs('sw_vers', ['-productVersion']).returns({stdout: '10.10.5'});
await system.macOsxVersion().should.eventually.equal('10.10');
});
it('should return correct version for 10.12', async function () {
tpMock.expects('exec').once().withExactArgs('sw_vers', ['-productVersion']).returns({stdout: '10.12.0'});
await system.macOsxVersion().should.eventually.equal('10.12');
});
it('should return correct version for 10.12 with newline', async function () {
tpMock.expects('exec').once().withExactArgs('sw_vers', ['-productVersion']).returns({stdout: '10.12 \n'});
await system.macOsxVersion().should.eventually.equal('10.12');
});
it("should throw an error if OSX version can't be determined", async function () {
let invalidOsx = 'error getting operation system version blabla';
tpMock.expects('exec').once().withExactArgs('sw_vers', ['-productVersion']).returns({stdout: invalidOsx});
await system.macOsxVersion().should.eventually.be.rejectedWith(new RegExp(_.escapeRegExp(invalidOsx)));
});
});
describe('architecture', function () {
beforeEach(function () {
sandbox = sinon.createSandbox();
mocks[SANDBOX] = sandbox;
for (let [key, value] of _.toPairs(libs)) {
mocks[key] = sandbox.mock(value);
}
});
afterEach(function () {
sandbox.restore();
});
it('should return correct architecture if it is a 64 bit Mac/Linux', async function () {
mocks.os.expects('type').thrice().returns('Darwin');
mocks.teen_process.expects('exec').once().withExactArgs('uname', ['-m']).returns({stdout: 'x86_64'});
let arch = await system.arch();
arch.should.equal('64');
mocks[SANDBOX].verify();
});
it('should return correct architecture if it is a 32 bit Mac/Linux', async function () {
mocks.os.expects('type').twice().returns('Linux');
mocks.teen_process.expects('exec').once().withExactArgs('uname', ['-m']).returns({stdout: 'i686'});
let arch = await system.arch();
arch.should.equal('32');
mocks[SANDBOX].verify();
});
it('should return correct architecture if it is a 64 bit Windows', async function () {
mocks.os.expects('type').thrice().returns('Windows_NT');
mocks.system.expects('isOSWin64').once().returns(true);
let arch = await system.arch();
arch.should.equal('64');
mocks[SANDBOX].verify();
});
it('should return correct architecture if it is a 32 bit Windows', async function () {
mocks.os.expects('type').thrice().returns('Windows_NT');
mocks.system.expects('isOSWin64').once().returns(false);
let arch = await system.arch();
arch.should.equal('32');
mocks[SANDBOX].verify();
});
});
it('should know architecture', function () {
return system.arch();
});
});
+82
View File
@@ -0,0 +1,82 @@
import { tempDir, fs } from '../index.js';
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
chai.use(chaiAsPromised);
chai.should();
describe('tempdir', function () {
afterEach(function () {
// remove the APPIUM_TMP_DIR override from the process env
delete process.env.APPIUM_TMP_DIR;
});
it('should be able to generate a path', async function () {
const path = await tempDir.path({prefix: 'myfile', suffix: '.tmp'});
path.should.exist;
path.should.include('myfile.tmp');
});
it('should be able to generate a path with process.env.APPIUM_TMP_DIR', async function () {
const preRootDirPath = await tempDir.openDir();
process.env.APPIUM_TMP_DIR = preRootDirPath;
const path = await tempDir.path({prefix: 'myfile', suffix: '.tmp'});
path.should.exist;
path.should.include(preRootDirPath);
path.should.include('myfile.tmp');
});
it('should be able to create a temp file', async function () {
let res = await tempDir.open({prefix: 'my-test-file', suffix: '.zip'});
res.should.exist;
res.path.should.exist;
res.path.should.include('my-test-file.zip');
res.fd.should.exist;
await fs.exists(res.path).should.eventually.be.ok;
});
it('should be able to create a temp file with process.env.APPIUM_TMP_DIR', async function () {
const preRootDirPath = await tempDir.openDir();
process.env.APPIUM_TMP_DIR = preRootDirPath;
let res = await tempDir.open({prefix: 'my-test-file', suffix: '.zip'});
res.should.exist;
res.path.should.exist;
res.path.should.include(preRootDirPath);
res.path.should.include('my-test-file.zip');
res.fd.should.exist;
await fs.exists(res.path).should.eventually.be.ok;
});
it('should generate a random temp dir', async function () {
let res = await tempDir.openDir();
res.should.be.a('string');
await fs.exists(res).should.eventually.be.ok;
let res2 = await tempDir.openDir();
await fs.exists(res2).should.eventually.be.ok;
res.should.not.equal(res2);
});
it('should generate a random temp dir, but the same with process.env.APPIUM_TMP_DIR', async function () {
const preRootDirPath = await tempDir.openDir();
process.env.APPIUM_TMP_DIR = preRootDirPath;
const res = await tempDir.openDir();
res.should.be.a('string');
await fs.exists(res).should.eventually.be.ok;
const res2 = await tempDir.openDir();
await fs.exists(res2).should.eventually.be.ok;
res.should.include(preRootDirPath);
res2.should.include(preRootDirPath);
res.should.not.equal(res2);
});
it('should generate one temp dir used for the life of the process', async function () {
let res = await tempDir.staticDir();
res.should.be.a('string');
await fs.exists(res).should.eventually.be.ok;
let res2 = await tempDir.staticDir();
await fs.exists(res2).should.eventually.be.ok;
res.should.equal(res2);
});
});
+143
View File
@@ -0,0 +1,143 @@
import _ from 'lodash';
import chai from 'chai';
import sinon from 'sinon';
import { timing } from '..';
chai.should();
const expect = chai.expect;
describe('timing', function () {
let processMock;
afterEach(function () {
processMock.verify();
});
describe('no bigint', function () {
const bigintFn = process.hrtime.bigint;
before(function () {
// if the system has BigInt support, remove it
if (_.isFunction(bigintFn)) {
delete process.hrtime.bigint;
}
});
beforeEach(function () {
processMock = sinon.mock(process);
});
after(function () {
if (_.isFunction(bigintFn)) {
process.hrtime.bigint = bigintFn;
}
});
it('should get a start time as array', function () {
const timer = new timing.Timer().start();
_.isArray(timer.startTime).should.be.true;
});
it('should get a duration', function () {
const timer = new timing.Timer().start();
const duration = timer.getDuration();
_.isNumber(duration.nanos).should.be.true;
});
it('should get correct seconds', function () {
processMock.expects('hrtime').twice()
.onFirstCall().returns([12, 12345])
.onSecondCall().returns([13, 54321]);
const timer = new timing.Timer().start();
const duration = timer.getDuration();
duration.asSeconds.should.eql(13.000054321);
});
it('should get correct milliseconds', function () {
processMock.expects('hrtime').twice()
.onFirstCall().returns([12, 12345])
.onSecondCall().returns([13, 54321]);
const timer = new timing.Timer().start();
const duration = timer.getDuration();
duration.asMilliSeconds.should.eql(13000.054321);
});
it('should get correct nanoseconds', function () {
processMock.expects('hrtime').twice()
.onFirstCall().returns([12, 12345])
.onSecondCall().returns([13, 54321]);
const timer = new timing.Timer().start();
const duration = timer.getDuration();
duration.asNanoSeconds.should.eql(13000054321);
});
it('should error if the timer was not started', function () {
const timer = new timing.Timer();
expect(() => timer.getDuration())
.to.throw('Unable to get duration');
});
it('should error if start time is a number', function () {
const timer = new timing.Timer();
timer._startTime = 12345;
expect(() => timer.getDuration())
.to.throw('Unable to get duration');
});
});
describe('bigint', function () {
beforeEach(function () {
// the non-mocked test cannot run if BigInt does not exist,
// and it cannot be mocked. Luckily support was added in Node 10.4.0,
// so it should not be a case where we are testing without this,
// though it still can be a test that Appium is _used_ without it.
if (!_.isFunction(process.hrtime.bigint)) {
return this.skip();
}
processMock = sinon.mock(process.hrtime);
});
function setupMocks (once = false) {
if (once) {
processMock.expects('bigint').once()
.onFirstCall().returns(BigInt(1172941153404030));
} else {
processMock.expects('bigint').twice()
.onFirstCall().returns(BigInt(1172941153404030))
.onSecondCall().returns(BigInt(1172951164887132));
}
}
it('should get a duration', function () {
setupMocks();
const timer = new timing.Timer().start();
const duration = timer.getDuration();
_.isNumber(duration.nanos).should.be.true;
});
it('should get correct seconds', function () {
setupMocks();
const timer = new timing.Timer().start();
const duration = timer.getDuration();
duration.asSeconds.should.be.eql(10.011483102);
});
it('should get correct milliseconds', function () {
setupMocks();
const timer = new timing.Timer().start();
const duration = timer.getDuration();
duration.asMilliSeconds.should.be.eql(10011.483102);
});
it('should get correct nanoseconds', function () {
setupMocks();
const timer = new timing.Timer().start();
const duration = timer.getDuration();
duration.asNanoSeconds.should.be.eql(10011483102);
});
it('should error if the timer was not started', function () {
const timer = new timing.Timer();
expect(() => timer.getDuration())
.to.throw('Unable to get duration');
});
it('should error if passing in a non-bigint', function () {
const timer = new timing.Timer();
timer._startTime = 12345;
expect(() => timer.getDuration())
.to.throw('Unable to get duration');
});
});
});
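A compact sketch of how `timing.Timer` is typically wrapped around an operation, based on the API exercised above; the `timed` wrapper is illustrative.
```js
// Sketch only: timing an arbitrary async operation.
import { timing } from '..';

async function timed (fn) {
  const timer = new timing.Timer().start();        // must be started before getDuration()
  try {
    return await fn();
  } finally {
    const duration = timer.getDuration();
    // asSeconds/asMilliSeconds/asNanoSeconds are derived from the same reading
    console.log(`took ${duration.asMilliSeconds.toFixed(0)}ms`); // eslint-disable-line no-console
  }
}
```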
+137
View File
@@ -0,0 +1,137 @@
import B from 'bluebird';
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
import path from 'path';
import * as util from '../lib/util';
import { tempDir, fs } from '../index';
chai.should();
chai.use(chaiAsPromised);
describe('#util', function () {
let tmpRoot;
let tmpFile;
const content = 'YOLO';
beforeEach(async function () {
tmpRoot = await tempDir.openDir();
tmpFile = path.resolve(tmpRoot, 'example.txt');
await fs.writeFile(tmpFile, content, 'utf8');
});
afterEach(async function () {
if (tmpRoot) {
await fs.rimraf(tmpRoot);
}
tmpRoot = null;
});
describe('toInMemoryBase64()', function () {
it('should convert a file to base64 encoding', async function () {
const data = await util.toInMemoryBase64(tmpFile);
const fileContent = await fs.readFile(tmpFile);
data.toString().should.eql(fileContent.toString('base64'));
});
});
describe('getLockFileGuard()', function () {
let tmpRoot;
let lockFile;
let testFile;
async function guardedBehavior (text, msBeforeActing) {
await B.delay(msBeforeActing);
await fs.appendFile(testFile, text, 'utf8');
return text;
}
async function testFileContents () {
return (await fs.readFile(testFile)).toString('utf8');
}
beforeEach(async function () {
tmpRoot = await tempDir.openDir();
lockFile = path.resolve(tmpRoot, 'test.lock');
testFile = path.resolve(tmpRoot, 'test');
await fs.writeFile(testFile, 'a', 'utf8');
});
afterEach(async function () {
try {
await B.all([lockFile, testFile].map((p) => fs.unlink(p)));
} catch (ign) {}
});
it('should lock a file during the given behavior', async function () {
const guard = util.getLockFileGuard(lockFile);
await guard.check().should.eventually.be.false;
const guardPromise = guard(async () => await guardedBehavior('b', 500));
await B.delay(200);
await guard.check().should.eventually.be.true;
await guardPromise;
await guard.check().should.eventually.be.false;
await testFileContents().should.eventually.eql('ab');
});
it('should recover a broken lock file', async function () {
await fs.writeFile(lockFile, 'dummy', 'utf8');
const guard = util.getLockFileGuard(lockFile, {
timeout: 3,
tryRecovery: true,
});
await guard(async () => await guardedBehavior('b', 500));
await guard.check().should.eventually.be.false;
await testFileContents().should.eventually.eql('ab');
});
it('should block other behavior until the lock is released', async function () {
// first prove that without a lock, we get races
await testFileContents().should.eventually.eql('a');
const unguardedPromise1 = guardedBehavior('b', 500);
const unguardedPromise2 = guardedBehavior('c', 100);
await unguardedPromise1;
await unguardedPromise2;
await testFileContents().should.eventually.eql('acb');
// now prove that with a lock, we don't get any interlopers
const guard = util.getLockFileGuard(lockFile);
const guardPromise1 = guard(async () => await guardedBehavior('b', 500));
const guardPromise2 = guard(async () => await guardedBehavior('c', 100));
await guardPromise1;
await guardPromise2;
await testFileContents().should.eventually.eql('acbbc');
});
it('should return the result of the guarded behavior', async function () {
const guard = util.getLockFileGuard(lockFile);
const guardPromise1 = guard(async () => await guardedBehavior('hello', 500));
const guardPromise2 = guard(async () => await guardedBehavior('world', 100));
const ret1 = await guardPromise1;
const ret2 = await guardPromise2;
ret1.should.eql('hello');
ret2.should.eql('world');
});
it('should time out if the lock is not released', async function () {
this.timeout(5000);
const guard = util.getLockFileGuard(lockFile, {timeout: 0.5});
const p1 = guard(async () => await guardedBehavior('hello', 1200));
const p2 = guard(async () => await guardedBehavior('world', 10));
await p2.should.eventually.be.rejectedWith(/not acquire lock/);
await p1.should.eventually.eql('hello');
});
it('should still release lock if guarded behavior fails', async function () {
this.timeout(5000);
const guard = util.getLockFileGuard(lockFile);
const p1 = guard(async () => {
await B.delay(500);
throw new Error('bad');
});
const p2 = guard(async () => await guardedBehavior('world', 100));
await p1.should.eventually.be.rejectedWith(/bad/);
await p2.should.eventually.eql('world');
});
});
});
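A short sketch of `util.getLockFileGuard` guarding a file write, mirroring the options and semantics exercised above; the lock file name and the `writeExclusively` helper are illustrative.
```js
// Sketch only: serializing writes to a shared file via a lock file.
import path from 'path';
import { util, tempDir, fs } from '..';

async function writeExclusively (targetFile, text) {
  const lockFile = path.resolve(await tempDir.staticDir(), 'writer.lock');
  const guard = util.getLockFileGuard(lockFile, {timeout: 5, tryRecovery: true});
  // the guarded callback runs only while the lock is held; its return value is
  // passed through, and the lock is released even if the callback throws
  return await guard(async () => {
    await fs.appendFile(targetFile, text, 'utf8');
    return text;
  });
}
```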
+530
View File
@@ -0,0 +1,530 @@
import { util, fs, tempDir } from '..';
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
import B from 'bluebird';
import sinon from 'sinon';
import os from 'os';
import path from 'path';
import _ from 'lodash';
const {W3C_WEB_ELEMENT_IDENTIFIER} = util;
const should = chai.should();
chai.use(chaiAsPromised);
describe('util', function () {
describe('hasValue', function () {
it('should exist', function () {
should.exist(util.hasValue);
});
it('should handle undefined', function () {
util.hasValue(undefined).should.be.false;
});
it('should handle not a number', function () {
util.hasValue(NaN).should.be.false;
});
it('should handle null', function () {
util.hasValue(null).should.be.false;
});
it('should handle functions', function () {
util.hasValue(function () {}).should.be.true;
});
it('should handle empty objects', function () {
util.hasValue({}).should.be.true;
});
it('should handle zero', function () {
util.hasValue(0).should.be.true;
});
it('should handle simple string', function () {
util.hasValue('string').should.be.true;
});
it('should handle booleans', function () {
util.hasValue(false).should.be.true;
});
it('should handle empty strings', function () {
util.hasValue('').should.be.true;
});
});
describe('hasContent', function () {
it('should exist', function () {
should.exist(util.hasContent);
});
it('should handle undefined', function () {
util.hasContent(undefined).should.be.false;
});
it('should handle not a number', function () {
util.hasContent(NaN).should.be.false;
});
it('should handle null', function () {
util.hasContent(null).should.be.false;
});
it('should handle functions', function () {
util.hasContent(function () {}).should.be.false;
});
it('should handle empty objects', function () {
util.hasContent({}).should.be.false;
});
it('should handle zero', function () {
util.hasContent(0).should.be.false;
});
it('should handle simple string', function () {
util.hasContent('string').should.be.true;
});
it('should handle booleans', function () {
util.hasContent(false).should.be.false;
});
it('should handle empty strings', function () {
util.hasContent('').should.be.false;
});
});
describe('escapeSpace', function () {
it('should do nothing to a string without space', function () {
let actual = 'appium';
let expected = 'appium';
util.escapeSpace(actual).should.equal(expected);
});
it('should escape spaces', function () {
let actual = '/Applications/ Xcode 6.1.1.app/Contents/Developer';
let expected = '/Applications/\\ Xcode\\ 6.1.1.app/Contents/Developer';
util.escapeSpace(actual).should.equal(expected);
});
it('should escape consecutive spaces', function () {
let actual = 'appium space';
let expected = 'appium\\ \\ \\ space';
util.escapeSpace(actual).should.equal(expected);
});
});
describe('localIp', function () {
it('should find a local ip address', function () {
let ifConfigOut = {
lo0:
[
{
address: '::1',
netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
family: 'IPv6',
mac: '00:00:00:00:00:00',
scopeid: 0,
internal: true,
},
{
address: '127.0.0.1',
netmask: '255.0.0.0',
family: 'IPv4',
mac: '00:00:00:00:00:00',
internal: true,
},
{
address: 'fe80::1',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: '00:00:00:00:00:00',
scopeid: 1,
internal: true,
}
],
en0:
[
{
address: 'xxx',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'd0:e1:40:93:56:9a',
scopeid: 4,
internal: false,
},
{
address: '123.123.123.123',
netmask: '255.255.254.0',
family: 'IPv4',
mac: 'xxx',
internal: false,
}
],
awdl0:
[
{
address: 'xxx',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'xxx',
scopeid: 7,
internal: false,
}
],
};
let osMock = sinon.mock(os);
osMock.expects('networkInterfaces').returns(ifConfigOut);
ifConfigOut = '';
let ip = util.localIp();
ip.should.eql('123.123.123.123');
osMock.verify();
});
});
describe('cancellableDelay', function () {
it('should delay', async function () {
await util.cancellableDelay('10');
});
it('cancel should work', async function () {
let delay = util.cancellableDelay('1000');
await B.delay(10);
delay.cancel();
await delay.should.eventually.be.rejectedWith(/cancellation error/);
});
});
describe('safeJsonParse', function () {
it('should pass object through', function () {
const obj = {a: 'a', b: 'b'};
util.safeJsonParse(obj).should.equal(obj);
});
it('should correctly parse json string', function () {
const obj = {a: 'a', b: 'b'};
util.safeJsonParse(JSON.stringify(obj)).should.eql(obj);
});
it('should pass an array through', function () {
const arr = ['a', 'b'];
util.safeJsonParse(arr).should.eql(arr);
});
it('should correctly parse json array', function () {
const arr = ['a', 'b'];
util.safeJsonParse(JSON.stringify(arr)).should.eql(arr);
});
it('should pass null through', function () {
const obj = null;
_.isNull(util.safeJsonParse(obj)).should.be.true;
});
it('should pass simple string through', function () {
const str = 'str';
util.safeJsonParse(str).should.eql(str);
});
it('should pass a number through', function () {
const num = 42;
util.safeJsonParse(num).should.eql(num);
});
it('should make a number from a string representation', function () {
const num = 42;
util.safeJsonParse(String(num)).should.eql(num);
});
});
describe('jsonStringify', function () {
it('should use JSON.stringify if no Buffer involved', function () {
const obj = {
k1: 'v1',
k2: 'v2',
k3: 'v3',
};
const jsonString = JSON.stringify(obj, null, 2);
util.jsonStringify(obj).should.eql(jsonString);
});
it('should serialize a Buffer', function () {
const obj = {
k1: 'v1',
k2: 'v2',
k3: Buffer.from('hi how are you today'),
};
util.jsonStringify(obj).should.include('hi how are you today');
});
it('should use the replacer function on non-buffer values', function () {
const obj = {
k1: 'v1',
k2: 'v2',
k3: 'v3',
};
function replacer (key, value) {
return _.isString(value) ? value.toUpperCase() : value;
}
const jsonString = util.jsonStringify(obj, replacer);
jsonString.should.include('V1');
jsonString.should.include('V2');
jsonString.should.include('V3');
});
it('should use the replacer function on buffers', function () {
const obj = {
k1: 'v1',
k2: 'v2',
k3: Buffer.from('hi how are you today'),
};
function replacer (key, value) {
return _.isString(value) ? value.toUpperCase() : value;
}
const jsonString = util.jsonStringify(obj, replacer);
jsonString.should.include('V1');
jsonString.should.include('V2');
jsonString.should.include('HI HOW ARE YOU TODAY');
});
it('should use the replacer function recursively', function () {
const obj = {
k1: 'v1',
k2: 'v2',
k3: Buffer.from('hi how are you today'),
k4: {
k5: 'v5',
},
};
function replacer (key, value) {
return _.isString(value) ? value.toUpperCase() : value;
}
const jsonString = util.jsonStringify(obj, replacer);
jsonString.should.include('V1');
jsonString.should.include('V2');
jsonString.should.include('HI HOW ARE YOU TODAY');
jsonString.should.include('V5');
});
});
describe('unwrapElement', function () {
it('should pass through an unwrapped element', function () {
let el = 4;
util.unwrapElement(el).should.equal(el);
});
it('should pass through an element that is an object', function () {
let el = {RANDOM: 4};
util.unwrapElement(el).should.equal(el);
});
it('should unwrap a wrapped element', function () {
let el = {ELEMENT: 4};
util.unwrapElement(el).should.eql(4);
});
it('should unwrap a wrapped element that uses W3C element identifier', function () {
let el = {
[W3C_WEB_ELEMENT_IDENTIFIER]: 5
};
util.unwrapElement(el).should.eql(5);
});
it('should unwrap a wrapped element and prioritize W3C element identifier', function () {
let el = {
ELEMENT: 7,
[W3C_WEB_ELEMENT_IDENTIFIER]: 6,
};
util.unwrapElement(el).should.eql(6);
});
});
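// wrapElement is expected to wrap an element id under both the legacy ELEMENT
// key and the W3C element identifier key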
describe('wrapElement', function () {
it('should include ELEMENT and w3c element', function () {
util.wrapElement(123).should.eql({
[util.W3C_WEB_ELEMENT_IDENTIFIER]: 123,
ELEMENT: 123,
});
});
});
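// toReadableSizeString is expected to format a byte count as a human-readable
// size (B/KB/MB/GB) and to throw if the value cannot be converted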
describe('toReadableSizeString', function () {
it('should fail if it cannot convert to Bytes', function () {
(() => util.toReadableSizeString('asdasd')).should.throw(/Cannot convert/);
});
it('should properly convert to Bytes', function () {
util.toReadableSizeString(0).should.equal('0 B');
});
it('should properly convert to KBytes', function () {
util.toReadableSizeString(2048 + 12).should.equal('2.01 KB');
});
it('should properly convert to MBytes', function () {
util.toReadableSizeString(1024 * 1024 * 3 + 1024 * 10).should.equal('3.01 MB');
});
it('should properly convert to GBytes', function () {
util.toReadableSizeString(1024 * 1024 * 1024 * 5).should.equal('5.00 GB');
});
});
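// filterObject is expected to keep entries whose values satisfy the predicate;
// without a predicate only undefined values are dropped, and a non-function
// predicate is compared against the values directly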
describe('filterObject', function () {
describe('with undefined predicate', function () {
it('should filter out undefineds', function () {
let obj = {
a: 'a',
b: 'b',
c: undefined,
};
util.filterObject(obj).should.eql({
a: 'a',
b: 'b',
});
});
it('should leave nulls alone', function () {
let obj = {
a: 'a',
b: 'b',
c: null,
};
util.filterObject(obj).should.eql({
a: 'a',
b: 'b',
c: null,
});
});
});
describe('with value predicate', function () {
it('should filter elements by their value', function () {
let obj = {
a: 'a',
b: 'b',
c: 'c',
d: 'a',
};
util.filterObject(obj, 'a').should.eql({
a: 'a',
d: 'a',
});
});
});
describe('with function predicate', function () {
it('should filter elements', function () {
let obj = {
a: 'a',
b: 'b',
c: 'c',
};
util.filterObject(obj, (v) => v === 'a' || v === 'c').should.eql({
a: 'a',
c: 'c',
});
});
});
});
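// isSubPath is expected to normalize both paths before comparing them and to
// accept absolute paths only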
describe('isSubPath', function () {
it('should detect simple subpath', function () {
util.isSubPath('/root/some', '/root').should.be.true;
});
it('should detect complex subpath', function () {
util.isSubPath('/root/some/other/../../.', '/root').should.be.true;
});
it('should detect subpath ending with a slash', function () {
util.isSubPath('/root/some/', '/root').should.be.true;
});
it('should detect if a path is not a subpath', function () {
util.isSubPath('/root/some//../..', '/root').should.be.false;
});
it('should throw if any of the given paths is not absolute', function () {
should.throw(() => util.isSubPath('some/..', '/root'), /absolute/);
});
});
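// isSameDestination is expected to report whether the given paths point to the
// same existing file system entry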
describe('isSameDestination', function () {
let path1;
let path2;
let tmpDir;
before(async function () {
tmpDir = await tempDir.openDir();
path1 = path.resolve(tmpDir, 'path1.txt');
path2 = path.resolve(tmpDir, 'path2.txt');
for (const p of [path1, path2]) {
await fs.writeFile(p, p, 'utf8');
}
});
after(async function () {
await fs.rimraf(tmpDir);
});
it('should match paths to the same file/folder', async function () {
(await util.isSameDestination(path1, path.resolve(tmpDir, '..', path.basename(tmpDir), path.basename(path1))))
.should.be.true;
});
it('should not match paths if they point to non-existent items', async function () {
(await util.isSameDestination(path1, 'blabla')).should.be.false;
});
it('should not match paths to different files', async function () {
(await util.isSameDestination(path1, path2)).should.be.false;
});
});
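// compareVersions is expected to evaluate two version strings against the given
// comparison operator and to throw on invalid versions or unsupported operators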
describe('compareVersions', function () {
it('should compare two correct version numbers', function () {
util.compareVersions('10.0', '<', '11.0').should.eql(true);
util.compareVersions('11.0', '>=', '11.0').should.eql(true);
util.compareVersions('11.0', '==', '11.0').should.eql(true);
util.compareVersions('13.10', '>', '13.5').should.eql(true);
util.compareVersions('11.1', '!=', '11.10').should.eql(true);
util.compareVersions('12.0', '<', 10).should.eql(false);
});
it('should throw if any of version arguments is invalid', function () {
should.throw(() => util.compareVersions(undefined, '<', '11.0'));
should.throw(() => util.compareVersions('11.0', '==', null));
});
it('should throw if comparison operator is unsupported', function () {
should.throw(() => util.compareVersions('12.0', 'abc', 10));
});
});
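// quote is expected to join the given arguments into a single shell-safe
// string, escaping whitespace, quotes and other special characters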
describe('quote', function () {
it('should quote a string with a space', function () {
util.quote(['a', 'b', 'c d']).should.eql('a b \'c d\'');
});
it('should escape double quotes', function () {
util.quote(['a', 'b', `it's a "neat thing"`]).should.eql(`a b "it's a \\"neat thing\\""`);
});
it("should escape $ ` and '", function () {
util.quote(['$', '`', `'`]).should.eql('\\$ \\` "\'"');
});
it('should handle empty array', function () {
util.quote([]).should.eql('');
});
it('should quote a string with newline', function () {
util.quote(['a\nb']).should.eql(`'a\nb'`);
});
it('should stringify booleans', function () {
util.quote(['a', 1, true, false]).should.eql('a 1 true false');
});
it('should stringify null and undefined', function () {
util.quote(['a', 1, null, undefined]).should.eql('a 1 null undefined');
});
});
describe('unleakString', function () {
it('should unleak a string', function () {
util.unleakString('yolo').should.eql('yolo');
});
it('should unleak a multiline string', function () {
util.unleakString(' yolo\nbolo ').should.eql(' yolo\nbolo ');
});
it('should convert an object to a string', function () {
for (const obj of [{}, null, undefined, [], 0]) {
util.unleakString(obj).should.eql(`${obj}`);
}
});
});
describe('pluralize', function () {
/*
 * The pluralize library (https://github.com/blakeembrey/pluralize)
 * has a robust set of tests. Here we just need to verify that it
 * is usable through the exported package and that the arguments are
 * passed through correctly
 */
it('should pluralize a string', function () {
util.pluralize('word', 2).should.eql('words');
});
it('should pluralize a string and prepend the number when given a boolean flag', function () {
util.pluralize('word', 2, true).should.eql('2 words');
});
it('should pluralize a string and prepend the number when given an options object', function () {
util.pluralize('word', 2, {inclusive: true}).should.eql('2 words');
});
});
});
+191
View File
@@ -0,0 +1,191 @@
import chai from 'chai';
import chaiAsPromised from 'chai-as-promised';
import path from 'path';
import * as zip from '../lib/zip';
import { tempDir, fs } from '../index';
import { MockReadWriteStream } from './helpers';
chai.use(chaiAsPromised);
describe('#zip', function () {
let assetsPath;
let zippedFilePath;
let tmpRoot;
beforeEach(async function () {
assetsPath = await tempDir.openDir();
tmpRoot = await tempDir.openDir();
const zippedBase64 = 'UEsDBAoAAAAAALlzk0oAAAAAAAAAAAAAAAAJABAAdW56aXBwZWQvVVgMANBO+VjO1vdY9QEUAFBLAwQKAAAAAADAc5NKAAAAAAAAAAAAAAAAEgAQAHVuemlwcGVkL3Rlc3QtZGlyL1VYDADQTvlY19b3WPUBFABQSwMEFAAIAAgAwnOTSgAAAAAAAAAAAAAAABcAEAB1bnppcHBlZC90ZXN0LWRpci9hLnR4dFVYDACDTvlY3Nb3WPUBFADzSM3JyVcIzy/KSQEAUEsHCFaxF0oNAAAACwAAAFBLAwQUAAgACADEc5NKAAAAAAAAAAAAAAAAFwAQAHVuemlwcGVkL3Rlc3QtZGlyL2IudHh0VVgMAINO+Vjf1vdY9QEUAHPLz1dwSiwCAFBLBwhIfrZJCQAAAAcAAABQSwECFQMKAAAAAAC5c5NKAAAAAAAAAAAAAAAACQAMAAAAAAAAAABA7UEAAAAAdW56aXBwZWQvVVgIANBO+VjO1vdYUEsBAhUDCgAAAAAAwHOTSgAAAAAAAAAAAAAAABIADAAAAAAAAAAAQO1BNwAAAHVuemlwcGVkL3Rlc3QtZGlyL1VYCADQTvlY19b3WFBLAQIVAxQACAAIAMJzk0pWsRdKDQAAAAsAAAAXAAwAAAAAAAAAAECkgXcAAAB1bnppcHBlZC90ZXN0LWRpci9hLnR4dFVYCACDTvlY3Nb3WFBLAQIVAxQACAAIAMRzk0pIfrZJCQAAAAcAAAAXAAwAAAAAAAAAAECkgdkAAAB1bnppcHBlZC90ZXN0LWRpci9iLnR4dFVYCACDTvlY39b3WFBLBQYAAAAABAAEADEBAAA3AQAAAAA=';
zippedFilePath = path.resolve(tmpRoot, 'zipped.zip');
await fs.writeFile(zippedFilePath, zippedBase64, 'base64');
await zip.extractAllTo(zippedFilePath, assetsPath);
});
afterEach(async function () {
for (const tmpPath of [assetsPath, tmpRoot]) {
if (!await fs.exists(tmpPath)) {
continue;
}
await fs.rimraf(tmpPath);
}
});
describe('extractAllTo()', function () {
it('should extract contents of a .zip file to a directory', async function () {
await fs.readFile(path.resolve(assetsPath, 'unzipped', 'test-dir', 'a.txt'), {encoding: 'utf8'}).should.eventually.equal('Hello World');
await fs.readFile(path.resolve(assetsPath, 'unzipped', 'test-dir', 'b.txt'), {encoding: 'utf8'}).should.eventually.equal('Foo Bar');
});
});
describe('assertValidZip', function () {
it('should not throw an error if a valid ZIP file is passed', async function () {
await zip.assertValidZip(zippedFilePath).should.eventually.be.fulfilled;
});
it('should throw an error if the file does not exist', async function () {
await zip.assertValidZip('blabla').should.eventually.be.rejected;
});
it('should throw an error if the file is invalid', async function () {
await zip.assertValidZip(path.resolve(assetsPath, 'unzipped', 'test-dir', 'a.txt')).should.eventually.be.rejected;
});
});
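// readEntries is expected to visit every entry (directories and files) of the
// archive; returning false from the onEntry callback stops the iteration early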
describe('readEntries()', function () {
const expectedEntries = [
{name: 'unzipped/'},
{name: 'unzipped/test-dir/'},
{name: 'unzipped/test-dir/a.txt', contents: 'Hello World'},
{name: 'unzipped/test-dir/b.txt', contents: 'Foo Bar'},
];
it('should iterate entries (directories and files) of zip file', async function () {
let i = 0;
await zip.readEntries(zippedFilePath, async ({entry, extractEntryTo}) => {
entry.fileName.should.equal(expectedEntries[i].name);
// If it's a file, test that we can extract it to a temporary directory and that the contents are correct
if (expectedEntries[i].contents) {
await extractEntryTo(tmpRoot);
await fs.readFile(path.resolve(tmpRoot, entry.fileName), {
flags: 'r',
encoding: 'utf8'
}).should.eventually.equal(expectedEntries[i].contents);
}
i++;
});
});
it('should stop iterating zipFile if onEntry callback returns false', async function () {
let i = 0;
await zip.readEntries(zippedFilePath, async () => { // eslint-disable-line require-await
i++;
return false;
});
i.should.equal(1);
});
it('should be rejected if it uses a non-zip file', async function () {
let promise = zip.readEntries(path.resolve(assetsPath, 'unzipped', 'test-dir', 'a.txt'), async () => {});
await promise.should.eventually.be.rejected;
});
});
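// toInMemoryZip is expected to zip a local folder into a Buffer (optionally
// base64-encoded) and to reject on a missing source path or when the maxSize
// limit is exceeded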
describe('toInMemoryZip()', function () {
it('should convert a local file to an in-memory zip buffer', async function () {
// Convert directory to in-memory buffer
const testFolder = path.resolve(assetsPath, 'unzipped');
const buffer = await zip.toInMemoryZip(testFolder);
Buffer.isBuffer(buffer).should.be.true;
// Write the buffer to a zip file
await fs.writeFile(path.resolve(tmpRoot, 'test.zip'), buffer);
// Unzip the file and test that it has the same contents as the directory that was zipped
await zip.extractAllTo(path.resolve(tmpRoot, 'test.zip'), path.resolve(tmpRoot, 'output'), {
fileNamesEncoding: 'utf8'
});
await fs.readFile(path.resolve(tmpRoot, 'output', 'test-dir', 'a.txt'), {
encoding: 'utf8'
}).should.eventually.equal('Hello World');
await fs.readFile(path.resolve(tmpRoot, 'output', 'test-dir', 'b.txt'), {
encoding: 'utf8'
}).should.eventually.equal('Foo Bar');
});
it('should convert a local folder to an in-memory base64-encoded zip buffer', async function () {
const testFolder = path.resolve(assetsPath, 'unzipped');
const buffer = await zip.toInMemoryZip(testFolder, {
encodeToBase64: true,
});
await fs.writeFile(path.resolve(tmpRoot, 'test.zip'), Buffer.from(buffer.toString(), 'base64'));
// Unzip the file and test that it has the same contents as the directory that was zipped
await zip.extractAllTo(path.resolve(tmpRoot, 'test.zip'), path.resolve(tmpRoot, 'output'));
await fs.readFile(path.resolve(tmpRoot, 'output', 'test-dir', 'a.txt'), {
encoding: 'utf8'
}).should.eventually.equal('Hello World');
await fs.readFile(path.resolve(tmpRoot, 'output', 'test-dir', 'b.txt'), {
encoding: 'utf8'
}).should.eventually.equal('Foo Bar');
});
it('should be rejected if given a bad path', async function () {
await zip.toInMemoryZip(path.resolve(assetsPath, 'bad_path'))
.should.be.rejectedWith(/no such/i);
});
it('should be rejected if max size is exceeded', async function () {
const testFolder = path.resolve(assetsPath, 'unzipped');
await zip.toInMemoryZip(testFolder, {
maxSize: 1,
}).should.be.rejectedWith(/must not be greater/);
});
});
describe('_extractEntryTo()', function () {
let entry, mockZipFile, mockZipStream;
beforeEach(async function () {
entry = {fileName: path.resolve(await tempDir.openDir(), 'temp', 'file')};
mockZipStream = new MockReadWriteStream();
mockZipFile = {
openReadStream: (entry, cb) => cb(null, mockZipStream), // eslint-disable-line promise/prefer-await-to-callbacks
};
});
it('should be rejected if zip stream emits an error', async function () {
mockZipStream.pipe = () => {
mockZipStream.emit('error', new Error('zip stream error'));
};
await zip._extractEntryTo(mockZipFile, entry).should.be.rejectedWith('zip stream error');
});
it('should be rejected if write stream emits an error', async function () {
mockZipStream.pipe = (writeStream) => {
writeStream.emit('error', new Error('write stream error'));
mockZipStream.end();
writeStream.end();
};
await zip._extractEntryTo(mockZipFile, entry).should.be.rejectedWith('write stream error');
});
});
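// toArchive is expected to zip the contents of the cwd folder into an archive
// at the given destination path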
describe('toArchive', function () {
it('should zip all files into an archive', async function () {
const testFolder = path.resolve(assetsPath, 'unzipped');
const dstPath = path.resolve(tmpRoot, 'test.zip');
await zip.toArchive(dstPath, {
cwd: testFolder,
});
// Unzip the file and test that it has the same contents as the directory that was zipped
await zip.extractAllTo(dstPath, path.resolve(tmpRoot, 'output'));
await fs.readFile(path.resolve(tmpRoot, 'output', 'test-dir', 'a.txt'), {
encoding: 'utf8'
}).should.eventually.equal('Hello World');
await fs.readFile(path.resolve(tmpRoot, 'output', 'test-dir', 'b.txt'), {
encoding: 'utf8'
}).should.eventually.equal('Foo Bar');
});
});
});