Merge branch 'master' of github.com:kubernetes-incubator/client-python into release-1.0

This commit is contained in:
mbohlool 2017-02-21 14:05:49 -08:00
commit 054edcf08b
15 changed files with 873 additions and 147 deletions

View File

@@ -1,3 +1,15 @@
# v1.0.0b2
- Support exec calls in both interactive and non-interactive mode #58
# v1.0.0b1
- Support insecure-skip-tls-verify config flag #99
- Added example for using yaml files as models #63
- Added end to end tests #41, #94
- Bugfix: Fix ValueError in list_namespaced_config_map #104
- Bugfix: Export missing models #101
- Bugfix: Patch operations #93
# v1.0.0a5
- Bugfix: Missing fields in some models #85, kubernetes/kubernetes#39465

136
devel/release.md Normal file

@@ -0,0 +1,136 @@
# Release process
The Python client release process involves creating (or updating) a release
branch, updating release tags, creating distribution packages, and uploading
them to PyPI.
## Change logs
Make sure the change logs are up to date [here](https://github.com/kubernetes-incubator/client-python/blob/master/CHANGELOG.md).
If they are not, review the commits added since the last release and update/commit
the change logs to master.
## Release branch
The release branch name should follow the release-x.x format. All minor and
pre-releases should be on the same branch. To update an existing branch:
```bash
export RELEASE_BRANCH=release-x.x
git checkout "$RELEASE_BRANCH"
git fetch upstream
git merge upstream/master
git push origin "$RELEASE_BRANCH"
```
You may need to fix some conflicts. For auto-generated files, you can commit
either version; they will be regenerated to the current version in the next step.
## Sanity check generated client
We need to make sure there are no API changes after running the client update
script. Such changes should be committed to the master branch first. Run this
command:
```bash
scripts/update-client.sh
```
and make sure there are no API changes (version-number changes are fine, as
they will be updated in the next step anyway). Do not commit any changes at
this step; go back to the master branch if there are any API changes.
## Update release tags
Release tags are in the scripts/constants.py file. These are the constants you may
need to update:
CLIENT_VERSION: The client version should follow x.y.zDn, where x, y, and z are
integer version numbers, D is "a" for alpha or "b" for beta, and n is the
pre-release number. For a final release, the "Dn" part is omitted. Examples:
1.0.0a1, 2.0.1b2, 1.5.1
SPEC_VERSION: The Kubernetes OpenAPI spec version. It should be
deprecated after kubernetes/kubernetes#37055 takes effect.
DEVELOPMENT_STATUS: Update it to one of the values of "Development Status"
in [this list](https://pypi.python.org/pypi?%3Aaction=list_classifiers).
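The CLIENT_VERSION scheme above can be sketched as a small validator. This is a hypothetical helper for illustration only (the regex and function name are not part of scripts/constants.py):

```python
import re

# x.y.z with an optional "a"/"b" pre-release suffix, e.g. 1.0.0a1, 2.0.1b2, 1.5.1.
# Hypothetical helper -- not part of the release scripts.
_VERSION_RE = re.compile(r'^\d+\.\d+\.\d+(?:[ab]\d+)?$')

def is_valid_client_version(version):
    """Return True if `version` matches the x.y.zDn scheme described above."""
    return _VERSION_RE.match(version) is not None
```

For example, `is_valid_client_version("1.0.0a1")` is true, while `"1.0"` (too few components) is rejected.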
After changing the constants to the proper versions, update the client using
this command:
```bash
scripts/update-client.sh
```
and commit the changes (these should be only version-number changes) to the release
branch. Name the commit something like "Update version constants for XXX release".
```bash
git push upstream "$RELEASE_BRANCH"
```
## Make distribution packages
First make sure you are using a clean version of Python. Use the virtualenv and
pyenv packages, and make sure you are using Python 2.7.12. I would normally do this
on a clean machine:
(install [pyenv](https://github.com/yyuu/pyenv#installation))
(install [pip](https://pip.pypa.io/en/stable/installing/))
(install [virtualenv](https://virtualenv.pypa.io/en/stable/installation/))
```bash
git clean -xdf
pyenv install 2.7.12
pyenv global 2.7.12
virtualenv .release
source .release/bin/activate
python --version # Make sure you get Python 2.7.12
pip install twine
```
Now we need to create a "~/.pypirc" file with this content:
```
[distutils]
index-servers=pypi
[pypi]
repository = https://upload.pypi.org/legacy/
username = kubernetes
```
TODO: we should be able to pass these parameters to twine directly. My first attempt failed.
Now that the environment is ready, let's create the distribution packages:
```bash
python setup.py sdist
python setup.py bdist_wheel --universal
ls dist/
```
You should see two files in the dist folder: kubernetes*.whl and kubernetes*.tar.gz.
TODO: We need a dry-run option and some way to test the package upload process to PyPI.
If everything looks good, run this command to upload the packages to PyPI:
```bash
twine upload dist/*
```
## Create GitHub release
Create a GitHub release by starting from
[this page](https://github.com/kubernetes-incubator/client-python/releases).
Click the "Draft a new release" button. Name the tag the same as CLIENT_VERSION, and
change the target branch to "release-x.y".
ref: https://packaging.python.org/distributing/
## Cleanup
```bash
deactivate
rm -rf .release
```
TODO: Convert the steps in this document into a (semi-)automated script.

26
devel/stats.md Normal file

@@ -0,0 +1,26 @@
# Download Statistics
PyPI stores download information in a [BigQuery public dataset](https://bigquery.cloud.google.com/dataset/the-psf:pypi). It can be queried for detailed information about downloads. For example, to get the number of downloads per version, you can run this query:
```sql
SELECT
file.version,
COUNT(*) AS total_downloads
FROM
TABLE_DATE_RANGE(
[the-psf:pypi.downloads],
TIMESTAMP("20161120"),
CURRENT_TIMESTAMP()
)
where file.project = "kubernetes"
GROUP BY
file.version
ORDER BY
total_downloads DESC
LIMIT 20
```
More example queries can be found [here](https://gist.github.com/alex/4f100a9592b05e9b4d63).
Reference: https://mail.python.org/pipermail/distutils-sig/2016-May/028986.html
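If you prefer to post-process rows in Python rather than aggregate in SQL, the same per-version grouping can be sketched locally. The row shape below is hypothetical, loosely mirroring the `file.version` field in the query above:

```python
from collections import Counter

def downloads_per_version(rows):
    """Group download rows by version and return (version, count) pairs,
    most-downloaded first -- the same aggregation the SQL query performs."""
    counts = Counter(row["version"] for row in rows)
    return counts.most_common()
```

For instance, three rows for "1.0.0" and one for "2.0.0" yield `[("1.0.0", 3), ("2.0.0", 1)]`.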

94
examples/exec.py Normal file

@@ -0,0 +1,94 @@
import time
from kubernetes import config
from kubernetes.client import configuration
from kubernetes.client.apis import core_v1_api
from kubernetes.client.rest import ApiException
config.load_kube_config()
configuration.assert_hostname = False
api = core_v1_api.CoreV1Api()
name = 'busybox-test'
resp = None
try:
resp = api.read_namespaced_pod(name=name,
namespace='default')
except ApiException as e:
if e.status != 404:
print("Unknown error: %s" % e)
exit(1)
if not resp:
print("Pod %s does not exist. Creating it..." % name)
pod_manifest = {
'apiVersion': 'v1',
'kind': 'Pod',
'metadata': {
'name': name
},
'spec': {
'containers': [{
'image': 'busybox',
'name': 'sleep',
"args": [
"/bin/sh",
"-c",
"while true;do date;sleep 5; done"
]
}]
}
}
resp = api.create_namespaced_pod(body=pod_manifest,
namespace='default')
while True:
resp = api.read_namespaced_pod(name=name,
namespace='default')
if resp.status.phase != 'Pending':
break
time.sleep(1)
print("Done.")
# Calling exec and waiting for the response.
exec_command = [
'/bin/sh',
'-c',
'echo This message goes to stderr >&2; echo This message goes to stdout']
resp = api.connect_get_namespaced_pod_exec(name, 'default',
command=exec_command,
stderr=True, stdin=False,
stdout=True, tty=False)
print("Response: " + resp)
# Calling exec interactively.
exec_command = ['/bin/sh']
resp = api.connect_get_namespaced_pod_exec(name, 'default',
command=exec_command,
stderr=True, stdin=True,
stdout=True, tty=False,
_preload_content=False)
commands = [
"echo test1",
"echo \"This message goes to stderr\" >&2",
]
while resp.is_open():
resp.update(timeout=1)
if resp.peek_stdout():
print("STDOUT: %s" % resp.read_stdout())
if resp.peek_stderr():
print("STDERR: %s" % resp.read_stderr())
if commands:
c = commands.pop(0)
print("Running command... %s\n" % c)
resp.write_stdin(c + "\n")
else:
break
resp.write_stdin("date\n")
sdate = resp.readline_stdout(timeout=3)
print("Server date command returns: %s" % sdate)
resp.write_stdin("whoami\n")
user = resp.readline_stdout(timeout=3)
print("Server user is: %s" % user)

View File

@@ -21,6 +21,7 @@ Copyright 2016 SmartBear Software
from __future__ import absolute_import
from . import models
from . import ws_client
from .rest import RESTClientObject
from .rest import ApiException
@@ -343,6 +344,15 @@ class ApiClient(object):
"""
Makes the HTTP request using RESTClient.
"""
# FIXME(dims) : We need a better way to figure out which
# calls end up using web sockets
if url.endswith('/exec') and (method == "GET" or method == "POST"):
return ws_client.websocket_call(self.config,
url,
query_params=query_params,
_request_timeout=_request_timeout,
_preload_content=_preload_content,
headers=headers)
if method == "GET":
return self.rest_client.GET(url,
query_params=query_params,

View File

@@ -85,6 +85,9 @@ class ConfigurationObject(object):
self.cert_file = None
# client key file
self.key_file = None
# check host name
# Set this to True/False to enable/disable SSL hostname verification.
self.assert_hostname = None
@property
def logger_file(self):

View File

@@ -95,13 +95,20 @@ class RESTClientObject(object):
# key file
key_file = config.key_file
kwargs = {
'num_pools': pools_size,
'cert_reqs': cert_reqs,
'ca_certs': ca_certs,
'cert_file': cert_file,
'key_file': key_file,
}
if config.assert_hostname is not None:
kwargs['assert_hostname'] = config.assert_hostname
# https pool manager
self.pool_manager = urllib3.PoolManager(
num_pools=pools_size,
cert_reqs=cert_reqs,
ca_certs=ca_certs,
cert_file=cert_file,
key_file=key_file
**kwargs
)
def request(self, method, url, query_params=None, headers=None,

View File

@@ -0,0 +1,237 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from .rest import ApiException
import select
import certifi
import time
import collections
from websocket import WebSocket, ABNF, enableTrace
import six
import ssl
from six.moves.urllib.parse import urlencode
from six.moves.urllib.parse import quote_plus
STDIN_CHANNEL = 0
STDOUT_CHANNEL = 1
STDERR_CHANNEL = 2
class WSClient:
def __init__(self, configuration, url, headers):
"""A websocket client with support for channels.
Exec command uses different channels for different streams. For
example, 0 is stdin, 1 is stdout and 2 is stderr. Some other API calls
like port forwarding can forward different pods' streams to different
channels.
"""
enableTrace(False)
header = []
self._connected = False
self._channels = {}
self._all = ""
# We just need to pass the Authorization, ignore all the other
# http headers we get from the generated code
if headers and 'authorization' in headers:
header.append("authorization: %s" % headers['authorization'])
if url.startswith('wss://') and configuration.verify_ssl:
ssl_opts = {
'cert_reqs': ssl.CERT_REQUIRED,
'keyfile': configuration.key_file,
'certfile': configuration.cert_file,
'ca_certs': configuration.ssl_ca_cert or certifi.where(),
}
if configuration.assert_hostname is not None:
ssl_opts['check_hostname'] = configuration.assert_hostname
else:
ssl_opts = {'cert_reqs': ssl.CERT_NONE}
self.sock = WebSocket(sslopt=ssl_opts, skip_utf8_validation=False)
self.sock.connect(url, header=header)
self._connected = True
def peek_channel(self, channel, timeout=0):
"""Peek a channel and return part of the input,
empty string otherwise."""
self.update(timeout=timeout)
if channel in self._channels:
return self._channels[channel]
return ""
def read_channel(self, channel, timeout=0):
"""Read data from a channel."""
if channel not in self._channels:
ret = self.peek_channel(channel, timeout)
else:
ret = self._channels[channel]
if channel in self._channels:
del self._channels[channel]
return ret
def readline_channel(self, channel, timeout=None):
"""Read a line from a channel."""
if timeout is None:
timeout = float("inf")
start = time.time()
while self.is_open() and time.time() - start < timeout:
if channel in self._channels:
data = self._channels[channel]
if "\n" in data:
index = data.find("\n")
ret = data[:index]
data = data[index+1:]
if data:
self._channels[channel] = data
else:
del self._channels[channel]
return ret
self.update(timeout=(timeout - time.time() + start))
def write_channel(self, channel, data):
"""Write data to a channel."""
self.sock.send(chr(channel) + data)
def peek_stdout(self, timeout=0):
"""Same as peek_channel with channel=1."""
return self.peek_channel(STDOUT_CHANNEL, timeout=timeout)
def read_stdout(self, timeout=None):
"""Same as read_channel with channel=1."""
return self.read_channel(STDOUT_CHANNEL, timeout=timeout)
def readline_stdout(self, timeout=None):
"""Same as readline_channel with channel=1."""
return self.readline_channel(STDOUT_CHANNEL, timeout=timeout)
def peek_stderr(self, timeout=0):
"""Same as peek_channel with channel=2."""
return self.peek_channel(STDERR_CHANNEL, timeout=timeout)
def read_stderr(self, timeout=None):
"""Same as read_channel with channel=2."""
return self.read_channel(STDERR_CHANNEL, timeout=timeout)
def readline_stderr(self, timeout=None):
"""Same as readline_channel with channel=2."""
return self.readline_channel(STDERR_CHANNEL, timeout=timeout)
def read_all(self):
"""Read all of the inputs in the same order they were received. The channel
information will be part of the string. This is useful for
non-interactive calls where a set of commands is passed to the API call and
their results are needed after the call concludes.
TODO: Maybe we can process this and return a more meaningful map with
channels mapped for each input.
"""
out = self._all
self._all = ""
self._channels = {}
return out
def is_open(self):
"""True if the connection is still alive."""
return self._connected
def write_stdin(self, data):
"""The same as write_channel with channel=0."""
self.write_channel(STDIN_CHANNEL, data)
def update(self, timeout=0):
"""Update channel buffers with at most one complete frame of input."""
if not self.is_open():
return
if not self.sock.connected:
self._connected = False
return
r, _, _ = select.select(
(self.sock.sock, ), (), (), timeout)
if r:
op_code, frame = self.sock.recv_data_frame(True)
if op_code == ABNF.OPCODE_CLOSE:
self._connected = False
return
elif op_code == ABNF.OPCODE_BINARY or op_code == ABNF.OPCODE_TEXT:
data = frame.data
if six.PY3:
data = data.decode("utf-8")
self._all += data
if len(data) > 1:
channel = ord(data[0])
data = data[1:]
if data:
if channel not in self._channels:
self._channels[channel] = data
else:
self._channels[channel] += data
def run_forever(self, timeout=None):
"""Wait till connection is closed or timeout reached. Buffer any input
received during this time."""
if timeout:
start = time.time()
while self.is_open() and time.time() - start < timeout:
self.update(timeout=(timeout - time.time() + start))
else:
while self.is_open():
self.update(timeout=None)
WSResponse = collections.namedtuple('WSResponse', ['data'])
def websocket_call(configuration, url, query_params, _request_timeout,
_preload_content, headers):
"""An internal function to be called in api-client when a websocket
connection is required."""
# switch protocols from http to websocket
url = url.replace('http://', 'ws://')
url = url.replace('https://', 'wss://')
# patch extra /
url = url.replace('//api', '/api')
# Extract the command from the list of tuples
commands = None
for key, value in query_params:
if key == 'command':
commands = value
break
# drop command from query_params as we will be processing it separately
query_params = [(key, value) for key, value in query_params if
key != 'command']
# if we still have query params then encode them
if query_params:
url += '?' + urlencode(query_params)
# tack on the actual command to execute at the end
if isinstance(commands, list):
for command in commands:
url += "&command=%s&" % quote_plus(command)
else:
url += '&command=' + quote_plus(commands)
try:
client = WSClient(configuration, url, headers)
if not _preload_content:
return client
client.run_forever(timeout=_request_timeout)
return WSResponse('%s' % ''.join(client.read_all()))
except (Exception, KeyboardInterrupt, SystemExit) as e:
raise ApiException(status=0, reason=str(e))

View File

@@ -0,0 +1,46 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import os
import unittest
import urllib3
from kubernetes.client.configuration import configuration
from kubernetes.config import kube_config
DEFAULT_E2E_HOST = '127.0.0.1'
def get_e2e_configuration():
config = copy.copy(configuration)
config.host = None
if os.path.exists(
os.path.expanduser(kube_config.KUBE_CONFIG_DEFAULT_LOCATION)):
kube_config.load_kube_config(client_configuration=config)
else:
print('Unable to load config from %s' %
kube_config.KUBE_CONFIG_DEFAULT_LOCATION)
for url in ['https://%s:8443' % DEFAULT_E2E_HOST,
'http://%s:8080' % DEFAULT_E2E_HOST]:
try:
urllib3.PoolManager().request('GET', url)
config.host = url
config.verify_ssl = False
break
except urllib3.exceptions.HTTPError:
pass
if config.host is None:
raise unittest.SkipTest('Unable to find a running Kubernetes instance')
print('Running test against : %s' % config.host)
config.assert_hostname = False
return config

View File

@@ -0,0 +1,59 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
import uuid
from kubernetes.client import api_client
from kubernetes.client.apis import batch_v1_api
from kubernetes.e2e_test import base
class TestClientBatch(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.config = base.get_e2e_configuration()
def test_job_apis(self):
client = api_client.ApiClient(config=self.config)
api = batch_v1_api.BatchV1Api(client)
name = 'test-job-' + str(uuid.uuid4())
job_manifest = {
'kind': 'Job',
'spec': {
'template':
{'spec':
{'containers': [
{'image': 'busybox',
'name': name,
'command': ["sh", "-c", "sleep 5"]
}],
'restartPolicy': 'Never'},
'metadata': {'name': name}}},
'apiVersion': 'batch/v1',
'metadata': {'name': name}}
resp = api.create_namespaced_job(
body=job_manifest, namespace='default')
self.assertEqual(name, resp.metadata.name)
resp = api.read_namespaced_job(
name=name, namespace='default')
self.assertEqual(name, resp.metadata.name)
resp = api.delete_namespaced_job(
name=name, body={}, namespace='default')

View File

@@ -12,65 +12,98 @@
# License for the specific language governing permissions and limitations
# under the License.
"""
test_client
----------------------------------
Tests for the `client` module. Deploy Kubernetes using:
http://kubernetes.io/docs/getting-started-guides/docker/
and then run this test.
"""
import time
import unittest
import urllib3
import uuid
from kubernetes.client import api_client
from kubernetes.client.apis import core_v1_api
from kubernetes.e2e_test import base
def _is_k8s_running():
try:
urllib3.PoolManager().request('GET', '127.0.0.1:8080')
return True
except urllib3.exceptions.HTTPError:
return False
def short_uuid():
id = str(uuid.uuid4())
return id[-12:]
class TestClient(unittest.TestCase):
@unittest.skipUnless(
_is_k8s_running(), "Kubernetes is not available")
def test_list_endpoints(self):
client = api_client.ApiClient('http://127.0.0.1:8080/')
api = core_v1_api.CoreV1Api(client)
endpoints = api.list_endpoints_for_all_namespaces()
self.assertTrue(len(endpoints.items) > 0)
@classmethod
def setUpClass(cls):
cls.config = base.get_e2e_configuration()
@unittest.skipUnless(
_is_k8s_running(), "Kubernetes is not available")
def test_pod_apis(self):
client = api_client.ApiClient('http://127.0.0.1:8080/')
client = api_client.ApiClient(config=self.config)
api = core_v1_api.CoreV1Api(client)
name = 'test-' + str(uuid.uuid4())
pod_manifest = {'apiVersion': 'v1',
'kind': 'Pod',
'metadata': {'color': 'blue', 'name': name},
'spec': {'containers': [{'image': 'dockerfile/redis',
'name': 'redis'}]}}
name = 'busybox-test-' + short_uuid()
pod_manifest = {
'apiVersion': 'v1',
'kind': 'Pod',
'metadata': {
'name': name
},
'spec': {
'containers': [{
'image': 'busybox',
'name': 'sleep',
"args": [
"/bin/sh",
"-c",
"while true;do date;sleep 5; done"
]
}]
}
}
resp = api.create_namespaced_pod(body=pod_manifest,
namespace='default')
self.assertEqual(name, resp.metadata.name)
self.assertTrue(resp.status.phase)
resp = api.read_namespaced_pod(name=name,
namespace='default')
self.assertEqual(name, resp.metadata.name)
self.assertTrue(resp.status.phase)
while True:
resp = api.read_namespaced_pod(name=name,
namespace='default')
self.assertEqual(name, resp.metadata.name)
self.assertTrue(resp.status.phase)
if resp.status.phase != 'Pending':
break
time.sleep(1)
exec_command = ['/bin/sh',
'-c',
'for i in $(seq 1 3); do date; done']
resp = api.connect_get_namespaced_pod_exec(name, 'default',
command=exec_command,
stderr=False, stdin=False,
stdout=True, tty=False)
print('EXEC response : %s' % resp)
self.assertEqual(3, len(resp.splitlines()))
exec_command = 'uptime'
resp = api.connect_post_namespaced_pod_exec(name, 'default',
command=exec_command,
stderr=False, stdin=False,
stdout=True, tty=False)
print('EXEC response : %s' % resp)
self.assertEqual(1, len(resp.splitlines()))
resp = api.connect_post_namespaced_pod_exec(name, 'default',
command='/bin/sh',
stderr=True, stdin=True,
stdout=True, tty=False,
_preload_content=False)
resp.write_stdin("echo test string 1\n")
line = resp.readline_stdout(timeout=5)
self.assertFalse(resp.peek_stderr())
self.assertEqual("test string 1", line)
resp.write_stdin("echo test string 2 >&2\n")
line = resp.readline_stderr(timeout=5)
self.assertFalse(resp.peek_stdout())
self.assertEqual("test string 2", line)
resp.write_stdin("exit\n")
resp.update(timeout=5)
self.assertFalse(resp.is_open())
number_of_pods = len(api.list_pod_for_all_namespaces().items)
self.assertTrue(number_of_pods > 0)
@@ -78,31 +111,30 @@ class TestClient(unittest.TestCase):
resp = api.delete_namespaced_pod(name=name, body={},
namespace='default')
@unittest.skipUnless(
_is_k8s_running(), "Kubernetes is not available")
def test_service_apis(self):
client = api_client.ApiClient('http://127.0.0.1:8080/')
client = api_client.ApiClient(config=self.config)
api = core_v1_api.CoreV1Api(client)
name = 'frontend-' + short_uuid()
service_manifest = {'apiVersion': 'v1',
'kind': 'Service',
'metadata': {'labels': {'name': 'frontend'},
'name': 'frontend',
'metadata': {'labels': {'name': name},
'name': name,
'resourceversion': 'v1'},
'spec': {'ports': [{'name': 'port',
'port': 80,
'protocol': 'TCP',
'targetPort': 80}],
'selector': {'name': 'frontend'}}}
'selector': {'name': name}}}
resp = api.create_namespaced_service(body=service_manifest,
namespace='default')
self.assertEqual('frontend', resp.metadata.name)
self.assertEqual(name, resp.metadata.name)
self.assertTrue(resp.status)
resp = api.read_namespaced_service(name='frontend',
resp = api.read_namespaced_service(name=name,
namespace='default')
self.assertEqual('frontend', resp.metadata.name)
self.assertEqual(name, resp.metadata.name)
self.assertTrue(resp.status)
service_manifest['spec']['ports'] = [{'name': 'new',
@@ -110,29 +142,28 @@ class TestClient(unittest.TestCase):
'protocol': 'TCP',
'targetPort': 8080}]
resp = api.patch_namespaced_service(body=service_manifest,
name='frontend',
name=name,
namespace='default')
self.assertEqual(2, len(resp.spec.ports))
self.assertTrue(resp.status)
resp = api.delete_namespaced_service(name='frontend',
resp = api.delete_namespaced_service(name=name,
namespace='default')
@unittest.skipUnless(
_is_k8s_running(), "Kubernetes is not available")
def test_replication_controller_apis(self):
client = api_client.ApiClient('http://127.0.0.1:8080/')
client = api_client.ApiClient(config=self.config)
api = core_v1_api.CoreV1Api(client)
name = 'frontend-' + short_uuid()
rc_manifest = {
'apiVersion': 'v1',
'kind': 'ReplicationController',
'metadata': {'labels': {'name': 'frontend'},
'name': 'frontend'},
'metadata': {'labels': {'name': name},
'name': name},
'spec': {'replicas': 2,
'selector': {'name': 'frontend'},
'selector': {'name': name},
'template': {'metadata': {
'labels': {'name': 'frontend'}},
'labels': {'name': name}},
'spec': {'containers': [{
'image': 'nginx',
'name': 'nginx',
@@ -141,29 +172,27 @@ class TestClient(unittest.TestCase):
resp = api.create_namespaced_replication_controller(
body=rc_manifest, namespace='default')
self.assertEqual('frontend', resp.metadata.name)
self.assertEqual(name, resp.metadata.name)
self.assertEqual(2, resp.spec.replicas)
resp = api.read_namespaced_replication_controller(
name='frontend', namespace='default')
self.assertEqual('frontend', resp.metadata.name)
name=name, namespace='default')
self.assertEqual(name, resp.metadata.name)
self.assertEqual(2, resp.spec.replicas)
resp = api.delete_namespaced_replication_controller(
name='frontend', body={}, namespace='default')
name=name, body={}, namespace='default')
@unittest.skipUnless(
_is_k8s_running(), "Kubernetes is not available")
def test_configmap_apis(self):
client = api_client.ApiClient('http://127.0.0.1:8080/')
client = api_client.ApiClient(config=self.config)
api = core_v1_api.CoreV1Api(client)
name = 'test-configmap-' + short_uuid()
test_configmap = {
"kind": "ConfigMap",
"apiVersion": "v1",
"metadata": {
"name": "test-configmap",
"name": name,
},
"data": {
"config.json": "{\"command\":\"/usr/bin/mysqld_safe\"}",
@@ -174,26 +203,24 @@ class TestClient(unittest.TestCase):
resp = api.create_namespaced_config_map(
body=test_configmap, namespace='default'
)
self.assertEqual('test-configmap', resp.metadata.name)
self.assertEqual(name, resp.metadata.name)
resp = api.read_namespaced_config_map(
name='test-configmap', namespace='default')
self.assertEqual('test-configmap', resp.metadata.name)
name=name, namespace='default')
self.assertEqual(name, resp.metadata.name)
test_configmap['data']['config.json'] = "{}"
resp = api.patch_namespaced_config_map(
name='test-configmap', namespace='default', body=test_configmap)
name=name, namespace='default', body=test_configmap)
resp = api.delete_namespaced_config_map(
name='test-configmap', body={}, namespace='default')
name=name, body={}, namespace='default')
resp = api.list_namespaced_config_map('kube-system', pretty=True)
resp = api.list_namespaced_config_map('default', pretty=True)
self.assertEqual([], resp.items)
@unittest.skipUnless(
_is_k8s_running(), "Kubernetes is not available")
def test_node_apis(self):
client = api_client.ApiClient('http://127.0.0.1:8080/')
client = api_client.ApiClient(config=self.config)
api = core_v1_api.CoreV1Api(client)
for item in api.list_node().items:

View File

@@ -0,0 +1,95 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
import uuid
import yaml
from kubernetes.client import api_client
from kubernetes.client.apis import extensions_v1beta1_api
from kubernetes.client.configuration import configuration
from kubernetes.client.models import v1_delete_options
from kubernetes.e2e_test import base
class TestClientExtensions(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.config = base.get_e2e_configuration()
def test_create_deployment(self):
client = api_client.ApiClient(config=self.config)
api = extensions_v1beta1_api.ExtensionsV1beta1Api(client)
name = 'nginx-deployment-' + str(uuid.uuid4())
deployment = '''apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: %s
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
'''
resp = api.create_namespaced_deployment(
body=yaml.load(deployment % name),
namespace="default")
resp = api.read_namespaced_deployment(name, 'default')
self.assertIsNotNone(resp)
options = v1_delete_options.V1DeleteOptions()
resp = api.delete_namespaced_deployment(name, 'default', body=options)
def test_create_daemonset(self):
client = api_client.ApiClient(config=self.config)
api = extensions_v1beta1_api.ExtensionsV1beta1Api(client)
name = 'nginx-app-' + str(uuid.uuid4())
daemonset = {
'apiVersion': 'extensions/v1beta1',
'kind': 'DaemonSet',
'metadata': {
'labels': {'app': 'nginx'},
'name': '%s' % name,
},
'spec': {
'template': {
'metadata': {
'labels': {'app': 'nginx'},
'name': name},
'spec': {
'containers': [
{'name': 'nginx-app',
'image': 'nginx:1.10'},
],
},
},
'updateStrategy': {
'type': 'RollingUpdate',
},
}
}
resp = api.create_namespaced_daemon_set('default', body=daemonset)
resp = api.read_namespaced_daemon_set(name, 'default')
self.assertIsNotNone(resp)
options = v1_delete_options.V1DeleteOptions()
resp = api.delete_namespaced_daemon_set(name, 'default', body=options)

View File

@@ -1,9 +1,9 @@
certifi >= 14.05.14
six == 1.8.0
six>=1.9.0
python_dateutil >= 2.5.3
setuptools >= 21.0.0
urllib3 >= 1.19.1
pyyaml >= 3.12
oauth2client >= 4.0.0
ipaddress >= 1.0.17
websocket-client>=0.32.0

View File

@@ -20,7 +20,7 @@ function clean_exit(){
local error_code="$?"
local spawned=$(jobs -p)
if [ -n "$spawned" ]; then
kill $(jobs -p)
sudo kill $(jobs -p)
fi
return $error_code
}
@@ -49,75 +49,48 @@ sudo systemctl start docker.service --ignore-dependencies
echo "Checking docker service"
sudo docker ps
# Run the docker containers for kubernetes
echo "Starting Kubernetes containers"
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
--name=kubelet \
-d \
gcr.io/google_containers/hyperkube-amd64:${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--allow-privileged=true --v=2
echo "Download Kubernetes CLI"
wget -O kubectl "http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl"
chmod 755 kubectl
./kubectl get nodes
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/
set +x
echo "Waiting for master components to start..."
for i in {1..300}
do
running_count=$(./kubectl -s=http://127.0.0.1:8080 get pods --no-headers 2>/dev/null | grep "Running" | wc -l)
# We expect to have 3 running pods - etcd, master and kube-proxy.
if [ "$running_count" -ge 3 ]; then
break
fi
echo -n "."
sleep 1
done
set -x
echo "Download localkube from minikube project"
wget -O localkube "https://storage.googleapis.com/minikube/k8sReleases/v1.6.0-alpha.0/localkube-linux-amd64"
sudo chmod +x localkube
sudo mv localkube /usr/local/bin/
echo "SUCCESS"
echo "Cluster created!"
echo ""
echo "Starting localkube"
sudo nohup localkube --logtostderr=true --enable-dns=false > localkube.log 2>&1 &
echo "Waiting for localkube to start..."
if ! timeout 120 sh -c "while ! curl -ks https://127.0.0.1:8443/ >/dev/null; do sleep 1; done"; then
sudo cat localkube.log
die $LINENO "localkube did not start"
fi
echo "Dump Kubernetes Objects..."
./kubectl -s=http://127.0.0.1:8080 get componentstatuses
./kubectl -s=http://127.0.0.1:8080 get configmaps
./kubectl -s=http://127.0.0.1:8080 get daemonsets
./kubectl -s=http://127.0.0.1:8080 get deployments
./kubectl -s=http://127.0.0.1:8080 get events
./kubectl -s=http://127.0.0.1:8080 get endpoints
./kubectl -s=http://127.0.0.1:8080 get horizontalpodautoscalers
./kubectl -s=http://127.0.0.1:8080 get ingress
./kubectl -s=http://127.0.0.1:8080 get jobs
./kubectl -s=http://127.0.0.1:8080 get limitranges
./kubectl -s=http://127.0.0.1:8080 get nodes
./kubectl -s=http://127.0.0.1:8080 get namespaces
./kubectl -s=http://127.0.0.1:8080 get pods
./kubectl -s=http://127.0.0.1:8080 get persistentvolumes
./kubectl -s=http://127.0.0.1:8080 get persistentvolumeclaims
./kubectl -s=http://127.0.0.1:8080 get quota
./kubectl -s=http://127.0.0.1:8080 get resourcequotas
./kubectl -s=http://127.0.0.1:8080 get replicasets
./kubectl -s=http://127.0.0.1:8080 get replicationcontrollers
./kubectl -s=http://127.0.0.1:8080 get secrets
./kubectl -s=http://127.0.0.1:8080 get serviceaccounts
./kubectl -s=http://127.0.0.1:8080 get services
kubectl get componentstatuses
kubectl get configmaps
kubectl get daemonsets
kubectl get deployments
kubectl get events
kubectl get endpoints
kubectl get horizontalpodautoscalers
kubectl get ingress
kubectl get jobs
kubectl get limitranges
kubectl get nodes
kubectl get namespaces
kubectl get pods
kubectl get persistentvolumes
kubectl get persistentvolumeclaims
kubectl get quota
kubectl get resourcequotas
kubectl get replicasets
kubectl get replicationcontrollers
kubectl get secrets
kubectl get serviceaccounts
kubectl get services
echo "Running tests..."

View File

@@ -6,6 +6,7 @@ passenv = TOXENV CI TRAVIS TRAVIS_*
usedevelop = True
install_command = pip install -U {opts} {packages}
deps = -r{toxinidir}/test-requirements.txt
-r{toxinidir}/requirements.txt
commands =
python -V
nosetests []
@@ -21,12 +22,12 @@ commands =
[testenv:py27-functional]
commands =
python -V
{toxinidir}/scripts/kube-init.sh nosetests []
{toxinidir}/scripts/kube-init.sh nosetests -v []
[testenv:py35-functional]
commands =
python -V
{toxinidir}/scripts/kube-init.sh nosetests []
{toxinidir}/scripts/kube-init.sh nosetests -v []
[testenv:coverage]
commands =