I'm working on an early iteration of an operator, which I've scaffolded using operator-sdk. I've tried my best to follow the examples from the Operator SDK Golang Tutorial and the Kubebuilder Book. I can deploy and run the operator in a local cluster, but I'm unable to run the test suite: every test produces a panic: runtime error: invalid memory address or nil pointer dereference, which I've tracked down to the fact that the Scheme is always nil. So far, though, I haven't been able to figure out why that is.
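
For reference, my reconciler is the standard scaffolded shape; the field names below come from the operator-sdk template, so take this as a sketch of the relevant part rather than my exact file:

package controllers

import (
	"github.com/go-logr/logr"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// MyBatchReconciler reconciles a MyBatch object. The Scheme field is the
// pointer that turns out to be nil whenever the test suite runs.
type MyBatchReconciler struct {
	client.Client
	Log    logr.Logger
	Scheme *runtime.Scheme
}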
In theory, I could skip the tests and just test out the operator in my local cluster, but that's going to be really brittle long-term. I'd like to be able to do TDD, and more importantly I'd like to have a test suite to go along with the operator to help maintain quality once it's in maintenance mode.
Here's my suite_test.go, which I've modified as little as possible from the scaffolded version (the changes I have made come from the Kubebuilder Book):
package controllers

import (
	"path/filepath"
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
	"sigs.k8s.io/controller-runtime/pkg/envtest/printer"
	logf "sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"

	mybatch "mycorp.com/mybatch-operator/api/v1alpha1"
	// +kubebuilder:scaffold:imports
)

// These tests use Ginkgo (BDD-style Go testing framework). Refer to
// http://onsi.github.io/ginkgo/ to learn more about Ginkgo.

var cfg *rest.Config
var k8sClient client.Client
var testEnv *envtest.Environment

func TestAPIs(t *testing.T) {
	RegisterFailHandler(Fail)

	RunSpecsWithDefaultAndCustomReporters(t,
		"Controller Suite",
		[]Reporter{printer.NewlineReporter{}})
}

var _ = BeforeSuite(func(done Done) {
	logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))

	By("bootstrapping test environment")
	testEnv = &envtest.Environment{
		CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},
	}

	cfg, err := testEnv.Start()
	Expect(err).NotTo(HaveOccurred())
	Expect(cfg).NotTo(BeNil())

	err = mybatch.AddToScheme(scheme.Scheme)
	Expect(err).NotTo(HaveOccurred())

	// +kubebuilder:scaffold:scheme

	k8sManager, err := ctrl.NewManager(cfg, ctrl.Options{
		Scheme: scheme.Scheme,
	})
	Expect(err).ToNot(HaveOccurred())

	err = (&MyBatchReconciler{
		Client: k8sManager.GetClient(),
		Log:    ctrl.Log.WithName("controllers").WithName("MyBatch"),
	}).SetupWithManager(k8sManager)
	Expect(err).ToNot(HaveOccurred())

	go func() {
		err = k8sManager.Start(ctrl.SetupSignalHandler())
		Expect(err).ToNot(HaveOccurred())
	}()

	k8sClient = k8sManager.GetClient()
	Expect(k8sClient).ToNot(BeNil())

	close(done)
}, 60)

var _ = AfterSuite(func() {
	By("tearing down the test environment")
	err := testEnv.Stop()
	Expect(err).NotTo(HaveOccurred())
})
Here's the test block that causes the failure. I also have a second Describe block (not shown in full; a rough sketch of it follows the failing test below), which tests some of the business logic outside of the Reconcile function, and that one works fine.
package controllers

import (
	"context"
	"time"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"

	"github.com/jarcoal/httpmock" // used by the second Describe block (not shown)

	mybatch "mycorp.com/mybatch-operator/api/v1alpha1"
)

var _ = Describe("BatchController", func() {
	Describe("Reconcile", func() {
		// Define utility constants for object names and testing timeouts/durations and intervals.
		const (
			BatchName      = "test-batch"
			BatchNamespace = "default"
			BatchImage     = "mycorp/mockserver:latest"

			timeout  = time.Second * 10
			duration = time.Second * 10
			interval = time.Millisecond * 250
		)

		Context("When deploying MyBatch", func() {
			It("Should create a new Batch instance", func() {
				ctx := context.Background()

				// Define stub Batch
				testCR := &mybatch.MyBatch{
					TypeMeta: metav1.TypeMeta{
						APIVersion: "mybatch.mycorp.com/v1alpha1",
						Kind:       "MyBatch",
					},
					ObjectMeta: metav1.ObjectMeta{
						Name:      BatchName,
						Namespace: BatchNamespace,
					},
					Spec: mybatch.MyBatchSpec{
						Replicas: 1,
						StatusCheck: mybatch.StatusCheck{
							Url:         "http://mycorp.com",
							Endpoint:    "/rest/jobs/jobexecutions/active",
							PollSeconds: 20,
						},
						Image: BatchImage,
						PodSpec: corev1.PodSpec{
							// For simplicity, we only fill out the required fields.
							Containers: []corev1.Container{
								{
									Name:  "test-container",
									Image: BatchImage,
								},
							},
							RestartPolicy: corev1.RestartPolicyAlways,
						},
					},
				}

				Expect(k8sClient.Create(ctx, testCR)).Should(Succeed())

				lookupKey := types.NamespacedName{Name: BatchName, Namespace: BatchNamespace}
				createdBatch := &mybatch.MyBatch{}

				// We'll need to retry getting this newly created Batch, given that creation may not immediately happen.
				Eventually(func() bool {
					err := k8sClient.Get(ctx, lookupKey, createdBatch)
					if err != nil {
						return false
					}
					return true
				}, timeout, interval).Should(BeTrue())

				// Check the container name
				Expect(createdBatch.Spec.PodSpec.Containers[0].Name).Should(Equal(BatchName))
			})
		})
	})
})
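
For contrast, the passing Describe block exercises plain functions and never goes through k8sClient or Reconcile. It looks roughly like this, where isJobActive is a hypothetical stand-in for my real helper, not the actual code:

package controllers

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// isJobActive is a hypothetical stand-in for the kind of pure
// business-logic helper the real second Describe block exercises;
// nothing here touches k8sClient or the Scheme.
func isJobActive(activeExecutions int) bool {
	return activeExecutions > 0
}

var _ = Describe("Business logic", func() {
	It("Should report the batch as active while executions remain", func() {
		Expect(isJobActive(1)).To(BeTrue())
		Expect(isJobActive(0)).To(BeFalse())
	})
})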
Is there something I'm missing here that's preventing the Scheme from being properly initialized? I have to admit that I don't really understand much of the code around the Scheme. I'm happy to show additional code if it will help.
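
For comparison, this is how the scaffolded main.go wires everything up when the operator actually runs in the cluster. I've reproduced it here from the operator-sdk template rather than copying my file verbatim, so treat it as a sketch:

package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"

	mybatch "mycorp.com/mybatch-operator/api/v1alpha1"
	"mycorp.com/mybatch-operator/controllers"
)

var (
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(mybatch.AddToScheme(scheme))
	// +kubebuilder:scaffold:scheme
}

func main() {
	// Flag parsing and metrics options omitted for brevity.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme: scheme,
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

	if err = (&controllers.MyBatchReconciler{
		Client: mgr.GetClient(),
		Log:    ctrl.Log.WithName("controllers").WithName("MyBatch"),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "MyBatch")
		os.Exit(1)
	}

	setupLog.Info("starting manager")
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}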