cypress/packages/server/lib/socket.coffee
Zach Bloomquist c1a345dce2 Improved proxy support (#3531)
* https-proxy: unused file

* server: wrap all https requests that use a proxy

* server: use request lib in ensureUrl if proxy is in use. this makes runs tab work behind a proxy

* electron: pass --proxy-server to app itself, so the embedded github login page works

* cli: first attempt at env vars from windows registry

* cli: api cleanup

* cli: lint

* cli: fix crash on no proxy, add tests

* add desktop-gui watch to terminals.json

* cli: pass along --proxy-source

* electron: pass --proxy-bypass-list too

* server: whitelist proxy* args

* cli: better wording

* desktop-gui: display proxy settings

* extension: force proxy [wip]

* extension: finally, i am victorious over coffeescript

* extension: add -loopback to bypasslist

* extension: revert changes

Revert "extension: force proxy [wip]"

This reverts commit 3ab6ba42a763f25ee65f12eb8b79eb597efc9b11.

* desktop-gui: skip proxysettings if there aren't any

* https-proxy, server: proxy directConnections using https-proxy-agent

* https-agent: pool httpsAgents

* https-proxy: work when they're not on a proxy

* https-proxy: ci - use agent 1.0

* https-proxy: tests

* desktop-gui: hide proxy settings when not using proxy

* https-proxy: pass req through to https-proxy-agent callback

* cli: use get-windows-proxy

* desktop-gui: always show proxy settings

* server: use get-windows-proxy

* electron, server: supply electron proxy config when window launched

* server: fix

* https-proxy: cleanup

* server: clean up ensureUrl

* https-proxy: cleanup

* cli: fix

* cli: fix destructuring

* server: enable ForeverAgent to pool HTTPS/HTTP connections

#3192

* server: updating snapshot

* https-proxy: don't crash, do error if proxy unreachable

* https-proxy:

* get-windows-proxy@1.0.0

* https-proxy: use proxy-from-env to decide on a proxy for a url

* server: fallback to HTTP_PROXY globally if HTTPS_PROXY not set

* server: proxy args test

* cli: add proxy tests

* cli: add test that loadSystemProxySettings is called during download

* cli, server: account for the fact that CI has some proxy vars set

* https-proxy: ""

* cli, https-proxy, server: ""

* desktop-gui: update settings gui

* desktop-gui: cypress tests for proxy settings

* server: strict undefined check

* cli, server: move get-windows-proxy to scope, optionalDeps

* server, cli: use new and improved get-windows-proxy

* cli, server: 1.5.0

* server: re-check for proxy since cli may have failed to load the lib

* server, cli: 1.5.1

* server: NO_PROXY=localhost by default, clean up

* https-proxy: disable Nagle's on proxy sockets

#3192

* https-proxy: use setNoDelay on upstream, cache https agent

* https-proxy: test basic auth

* https-proxy: add todo: remove this

* server: add custom HTTP(s) Agent implementation w keepalive, tunneling

* server: typescript for agent

* add ts to zunder

* server: more ts

* ts: add missing Agent type declaration

* server: create CombinedAgent

* server: use agent in more places

* ts: more declarations

* server: make script work even if debug port not supplied

* server: begin some testing

* server, ts: agent, tests

* server: test

* server: agent works with websockets now

* server: update snapshot

* server: work out some more bugs with websockets

* server: more websockets

* server: add net_profiler

* https-proxy: fix dangling socket on direct connection

* server: fix potential 'headers have already been sent'

* https-proxy: nab another dangler

* server: update test to expect agent

* https-proxy: fix failing test

* desktop-gui: change on-link

* server: add tests for empty response case

* server: tests

* server: send keep-alive with requests

* server: make net profiler hook on socket.connect

* server: only hook profiler once

* server: update tests, add keep-alive test

* server: only regen headers if needed

* server: move http_overrides into CombinedAgent, make it proxy-proof

for #112

* server: update snapshot

* server: undo

* server: avoid circular dependency

* https-proxy, server: use our Agent instead of https-proxy-agent

* server: add dependency back

* cli: actually use proxy for download

* server, launcher, ts: typescript

* Revert "server, launcher, ts: typescript"

This reverts commit d3f8b8bbb6.

* Revert "Revert "server, launcher, ts: typescript""

This reverts commit 818dfdfd00.

* ts, server: respond to PR

* server, ts: types

* ts: really fix types

* https-proxy, server: export CA from https-proxy

* agent, server, https-proxy: move agent to own package

* agent => networking, move connect into networking

* fix tests

* fix test

* networking: respond to PR changes, add more unit tests

* rename ctx

* networking, ts: add more tests

* server: add ensureUrl tests

* https-proxy: remove https-proxy-agent

* server: use CombinedAgent for API

* server: updates

* add proxy performance tests

* add perf tests to workflow

* circle

* run perf tests with --no-sandbox

* networking, ts: ch-ch-ch-ch-changes

* server, networking: pr changes

* run networking tests in circle

* server: fix performance test

* https-proxy: test that sockets are being closed

* https-proxy: write, not emit

* networking: fix test

* networking: bubble err in connect

* networking: style

* networking: clean p connect error handling

* networking => network

* server: make perf tests really work

* server: really report

* server: use args from browser

* server: use AI to determine max run time

* server: load electron only when needed


Co-authored-by: Brian Mann <brian@cypress.io>
2019-03-31 23:39:10 -04:00


_ = require("lodash")
path = require("path")
debug = require('debug')('cypress:server:socket')
Promise = require("bluebird")
socketIo = require("@packages/socket")
fs = require("./util/fs")
open = require("./util/open")
pathHelpers = require("./util/path_helpers")
cwd = require("./cwd")
exec = require("./exec")
task = require("./task")
files = require("./files")
fixture = require("./fixture")
errors = require("./errors")
automation = require("./automation")
preprocessor = require("./plugins/preprocessor")
runnerEvents = [
  "reporter:restart:test:run"
  "runnables:ready"
  "run:start"
  "test:before:run:async"
  "reporter:log:add"
  "reporter:log:state:changed"
  "paused"
  "test:after:hooks"
  "run:end"
]
reporterEvents = [
  # "go:to:file"
  "runner:restart"
  "runner:abort"
  "runner:console:log"
  "runner:console:error"
  "runner:show:snapshot"
  "runner:hide:snapshot"
  "reporter:restarted"
]
retry = (fn) ->
  Promise.delay(25).then(fn)

isSpecialSpec = (name) ->
  name.endsWith("__all")
class Socket
  constructor: (config) ->
    if not (@ instanceof Socket)
      return new Socket(config)

    @ended = false

    @onTestFileChange = @onTestFileChange.bind(@)

    if config.watchForFileChanges
      preprocessor.emitter.on("file:updated", @onTestFileChange)

  onTestFileChange: (filePath) ->
    debug("test file changed %o", filePath)

    fs.statAsync(filePath)
    .then =>
      @io.emit("watched:file:changed")
    .catch ->
      debug("could not find test file that changed %o", filePath)

  ## TODO: clean this up by sending the spec object instead of
  ## the url path
  watchTestFileByPath: (config, originalFilePath, options) ->
    ## files are always sent as integration/foo_spec.js
    ## need to take into account integrationFolder may be different so
    ## integration/foo_spec.js becomes cypress/my-integration-folder/foo_spec.js
    debug("watch test file %o", originalFilePath)
    filePath = path.join(config.integrationFolder, originalFilePath.replace("integration#{path.sep}", ""))
    filePath = path.relative(config.projectRoot, filePath)

    ## bail if this is special path like "__all"
    ## maybe the client should not ask to watch non-spec files?
    return if isSpecialSpec(filePath)

    ## bail if we're already watching this exact file
    return if filePath is @testFilePath

    ## remove the existing file by its path
    if @testFilePath
      preprocessor.removeFile(@testFilePath, config)

    ## store this location
    @testFilePath = filePath

    debug("will watch test file path %o", filePath)

    preprocessor.getFile(filePath, config)
    ## ignore errors b/c we're just setting up the watching. errors
    ## are handled by the spec controller
    .catch ->

  toReporter: (event, data) ->
    @io and @io.to("reporter").emit(event, data)

  toRunner: (event, data) ->
    @io and @io.to("runner").emit(event, data)

  isSocketConnected: (socket) ->
    socket and socket.connected

  onAutomation: (socket, message, data, id) ->
    ## instead of throwing immediately here perhaps we need
    ## to make this more resilient by automatically retrying
    ## up to 1 second in the case where our automation room
    ## is empty. that would give padding for reconnections
    ## to automatically happen.
    ## for instance when socket.io detects a disconnect
    ## does it immediately remove the member from the room?
    ## YES it does per http://socket.io/docs/rooms-and-namespaces/#disconnection
    if @isSocketConnected(socket)
      socket.emit("automation:request", id, message, data)
    else
      throw new Error("Could not process '#{message}'. No automation clients connected.")

  createIo: (server, path, cookie) ->
    socketIo.server(server, {
      path: path
      destroyUpgrade: false
      serveClient: false
      cookie: cookie
    })

  startListening: (server, automation, config, options) ->
    existingState = null

    _.defaults options,
      socketId: null
      onSetRunnables: ->
      onMocha: ->
      onConnect: ->
      onRequest: ->
      onResolveUrl: ->
      onFocusTests: ->
      onSpecChanged: ->
      onChromiumRun: ->
      onReloadBrowser: ->
      checkForAppErrors: ->
      onSavedStateChanged: ->
      onTestFileChange: ->

    automationClient = null

    {integrationFolder, socketIoRoute, socketIoCookie} = config

    @testsDir = integrationFolder

    @io = @createIo(server, socketIoRoute, socketIoCookie)

    automation.use({
      onPush: (message, data) =>
        @io.emit("automation:push:message", message, data)
    })

    onAutomationClientRequestCallback = (message, data, id) =>
      @onAutomation(automationClient, message, data, id)

    automationRequest = (message, data) ->
      automation.request(message, data, onAutomationClientRequestCallback)

    @io.on "connection", (socket) =>
      debug("socket connected")

      ## cache the headers so we can access
      ## them at any time
      headers = socket.request?.headers ? {}

      socket.on "automation:client:connected", =>
        return if automationClient is socket

        automationClient = socket

        debug("automation:client connected")

        ## if our automation disconnects then we're
        ## in trouble and should probably bomb everything
        automationClient.on "disconnect", =>
          ## if we've stopped then don't do anything
          return if @ended

          ## if we are in headless mode then log out an error and maybe exit with process.exit(1)?
          Promise.delay(500)
          .then =>
            ## bail if we've swapped to a new automationClient
            return if automationClient isnt socket

            ## give ourselves about 500ms to reconnect
            ## and if we're connected it's all good
            return if automationClient.connected

            ## TODO: if all of our clients have also disconnected
            ## then don't warn anything
            errors.warning("AUTOMATION_SERVER_DISCONNECTED")

            ## TODO: no longer emit this, just close the browser and display message in reporter
            @io.emit("automation:disconnected")

      socket.on "automation:push:request", (message, data, cb) =>
        automation.push(message, data)

        ## just immediately callback because there
        ## is not really an 'ack' here
        cb() if cb

      socket.on "automation:response", automation.response

      socket.on "automation:request", (message, data, cb) =>
        debug("automation:request %s %o", message, data)

        automationRequest(message, data)
        .then (resp) ->
          cb({response: resp})
        .catch (err) ->
          cb({error: errors.clone(err)})

      socket.on "reporter:connected", =>
        return if socket.inReporterRoom

        socket.inReporterRoom = true
        socket.join("reporter")

      ## TODO: what to do about reporter disconnections?

      socket.on "runner:connected", ->
        return if socket.inRunnerRoom

        socket.inRunnerRoom = true
        socket.join("runner")

      ## TODO: what to do about runner disconnections?

      socket.on "spec:changed", (spec) ->
        options.onSpecChanged(spec)

      socket.on "watch:test:file", (filePath, cb = ->) =>
        @watchTestFileByPath(config, filePath, options)

        ## callback is only for testing purposes
        cb()

      socket.on "app:connect", (socketId) ->
        options.onConnect(socketId, socket)

      socket.on "set:runnables", (runnables, cb) =>
        options.onSetRunnables(runnables)
        cb()

      socket.on "mocha", =>
        options.onMocha.apply(options, arguments)

      socket.on "open:finder", (p, cb = ->) ->
        open.opn(p)
        .then -> cb()

      socket.on "reload:browser", (url, browser) ->
        options.onReloadBrowser(url, browser)

      socket.on "focus:tests", ->
        options.onFocusTests()

      socket.on "is:automation:client:connected", (data = {}, cb) =>
        isConnected = =>
          automationRequest("is:automation:client:connected", data)

        tryConnected = =>
          Promise
          .try(isConnected)
          .catch ->
            retry(tryConnected)

        ## retry for up to data.timeout
        ## or 1 second
        Promise
        .try(tryConnected)
        .timeout(data.timeout ? 1000)
        .then ->
          cb(true)
        .catch Promise.TimeoutError, (err) ->
          cb(false)

      socket.on "backend:request", (eventName, args...) =>
        ## cb is always the last argument
        cb = args.pop()

        debug("backend:request %o", { eventName, args })

        backendRequest = ->
          switch eventName
            when "preserve:run:state"
              existingState = args[0]
              null
            when "resolve:url"
              [url, resolveOpts] = args
              options.onResolveUrl(url, headers, automationRequest, resolveOpts)
            when "http:request"
              options.onRequest(headers, automationRequest, args[0])
            when "get:fixture"
              fixture.get(config.fixturesFolder, args[0], args[1])
            when "read:file"
              files.readFile(config.projectRoot, args[0], args[1])
            when "write:file"
              files.writeFile(config.projectRoot, args[0], args[1], args[2])
            when "exec"
              exec.run(config.projectRoot, args[0])
            when "task"
              task.run(config.pluginsFile, args[0])
            else
              throw new Error(
                "You requested a backend event we cannot handle: #{eventName}"
              )

        Promise.try(backendRequest)
        .then (resp) ->
          cb({response: resp})
        .catch (err) ->
          cb({error: errors.clone(err)})

      socket.on "get:existing:run:state", (cb) ->
        if (s = existingState)
          existingState = null
          cb(s)
        else
          cb()

      socket.on "save:app:state", (state, cb) ->
        options.onSavedStateChanged(state)

        ## we only use the 'ack' here in tests
        cb() if cb

      socket.on "external:open", (url) ->
        require("electron").shell.openExternal(url)

      reporterEvents.forEach (event) =>
        socket.on event, (data) =>
          @toRunner(event, data)

      runnerEvents.forEach (event) =>
        socket.on event, (data) =>
          @toReporter(event, data)

  end: ->
    @ended = true

    ## TODO: we need an 'ack' from this end
    ## event from the other side
    @io and @io.emit("tests:finished")

  changeToUrl: (url) ->
    @toRunner("change:to:url", url)

  close: ->
    preprocessor.emitter.removeListener("file:updated", @onTestFileChange)

    @io?.close()
module.exports = Socket
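
The `is:automation:client:connected` handler above polls the automation client every 25ms (via the `retry` helper) until the request succeeds or `data.timeout` (1000ms by default) elapses, using bluebird's `Promise.delay` and `.timeout`. The same retry-until-timeout pattern can be sketched in plain JavaScript without bluebird — `retryUntilTimeout` is a hypothetical name for illustration, not a Cypress API:

```javascript
// Plain-Promise sketch of the retry loop used by the
// "is:automation:client:connected" handler above. Unlike bluebird's
// .timeout(), this surfaces the last failure once the deadline passes
// rather than a TimeoutError.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Keep calling `check` (which rejects while not connected) every 25ms
// until it resolves or `timeout` ms have elapsed.
function retryUntilTimeout(check, timeout = 1000) {
  const deadline = Date.now() + timeout;
  const attempt = () =>
    Promise.resolve()
      .then(check)
      .catch((err) => {
        if (Date.now() >= deadline) throw err;
        return delay(25).then(attempt);
      });
  return attempt();
}
```

In the handler, success maps to `cb(true)` and exhausting the deadline maps to `cb(false)`.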