SSL_Handshake: close nio channel when NioClient fail to handshake wit… #10153
Description
This PR fixes an issue where, if the cs-agent throws an exception during the SSL handshake, the TCP connection between the cs-server and the cs-agent is not closed, which in turn causes the server thread to hang forever.
When the SSL handshake reaches the client key exchange phase, the server waits for the agent to continue the handshake. On the agent side, however, an exception can occur when the agent cannot accept the cipher suite the server selected, so the agent never sends the client key exchange message. As a result, the handshake thread on the server side is stuck forever in a call that expects to read packets from the SocketChannel. A sketch of the idea behind the fix follows below.
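For illustration only, here is a minimal sketch of the approach (the class and method names are hypothetical and do not reflect the actual NioClient/NioConnection code): if the handshake loop throws, the channel is closed so the peer's blocking read returns instead of waiting forever.

```java
import java.io.IOException;
import java.nio.channels.SocketChannel;
import javax.net.ssl.SSLEngine;

public class HandshakeSketch {
    /**
     * Run the TLS handshake; if it fails, close the channel so the remote
     * side's pending read sees the connection terminate instead of hanging.
     */
    static boolean handshakeOrClose(SocketChannel channel, SSLEngine sslEngine) throws IOException {
        try {
            return doHandshake(channel, sslEngine);
        } catch (IOException e) {
            // Close the channel on handshake failure so the TCP connection
            // is torn down (FIN sent) rather than left half-open.
            channel.close();
            throw e;
        }
    }

    // Placeholder for the real non-blocking handshake loop driven over the channel.
    static boolean doHandshake(SocketChannel channel, SSLEngine sslEngine) throws IOException {
        sslEngine.beginHandshake();
        // ... sslEngine.wrap()/unwrap() loop over the channel elided ...
        return true;
    }
}
```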
Steps to reproduce this issue
1. The server uses a 1024-bit RSA public key, which you can verify with "keytool -list -storepass $keystore_password -keystore $keystore_file -v".
2. Find "Subject Public Key Algorithm" in the output of step 1.
3. On the agent, edit "JAVA_HOME/jre/lib/security/java.security" and append "RSA keySize < 2048" to jdk.tls.disabledAlgorithms (see the excerpt after this list).
4. Restart cloudstack-agent.
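For reference, a sketch of what the edited java.security entry might look like after step 3 (the other algorithms shown are only illustrative defaults; keep whatever your JDK already lists and append only the RSA key-size restriction):

```properties
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 1024, RSA keySize < 2048
```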
How Has This Been Tested?
Before this change, in the situation described above, the TCP connection on the agent side stayed in the CLOSE_WAIT state forever. With this change applied, the agent actively closes the channel, which closes the TCP connection; the connection then moves to TIME_WAIT, the normal state for a connection that is shutting down. One way to observe this is shown below.
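As an illustration, the connection state can be checked on the agent host with ss or netstat; port 8250 is assumed here to be the agent-to-management-server port, so adjust it for your deployment:

```sh
# Without the fix: the agent-side connection lingers in CLOSE_WAIT
ss -tan | grep 8250

# With the fix: the connection transitions through TIME_WAIT while closing
netstat -an | grep 8250
```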