@jhanna.garcia has joined the channel
Not across continents. GCR has continent-wide redundancy - locations are US, Europe, Asia
ECR is S3-backed, might have replication built in.
ok, so I've set up a kubernetes cluster locally in KVM VMs here at the house. So far so good.
The issue I'm running into currently though is I need persistent volumes for cockroachdb and I'm not entirely sure how to do that best.
You could use cockroach db to store your cockroachdb?
I thought there was another one, but I can't remember it off the top of my head.
it'll use CockroachDB as a storage provider? So I can use CockroachDB to provide the storage for my CockroachDB? :slightly_smiling_face:
maybe minio was the other one I was thinking of. Which rook uses
jesus this is complicated just to get something simple done. Freakin' Kubernetes. :slightly_smiling_face:
that should be the channel topic lol
wait, this all wants to run as containers, so I still have the same issue really
I'm just shifting it down a layer
All problems in computer science can be solved with another layer of indirection.
do you have storage somewhere that you can use? Like a NAS that presents as iscsi?
that's a native PV
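For reference, an iSCSI-backed PV is just a PersistentVolume with an `iscsi` source. This is only a sketch - the portal address, IQN, and size below are placeholders, not real values:

```yaml
# Sketch of a PV backed by an iSCSI target on a NAS.
# targetPortal and iqn are made-up placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 192.168.1.50:3260          # hypothetical NAS address
    iqn: iqn.2019-01.local.nas:storage.vol0  # hypothetical target IQN
    lun: 0
    fsType: ext4
```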
oh shit. I see bakins typing
he'll solve everything
If you must be on prem, you can look at the local volume stuff. I don’t recall if it’s beta or release yet
I can do NFS/iSCSI/etc (it's a linux box after all!)
I don't think local volume can be dynamic (which is fine)
I just don't know how to manually create the claim and make sure that cassandra uses it
that part is.... not clear.
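A manually created local PV looks roughly like this - the path and hostname are placeholders. Local PVs have to be pinned to a node with `nodeAffinity`, which is part of why they can't be provisioned dynamically:

```yaml
# Sketch of a local PV: a directory on one specific node.
# Path and hostname are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1   # hypothetical node name
```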
Openebs is another I believe
Ok, so I can create a PV via NFS
how do I turn that into a PVC?
A pvc should kind of happen automatically with your deployment, IIRC
the PVC will try to get a PV that will fit the claim. Sometimes depending on the cloud provider handler it's an all in one step.
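Roughly, a hand-made NFS PV plus a claim that binds to it looks like this (server address, export path, and sizes are all placeholders - the claim binds to any PV that satisfies its size and access mode):

```yaml
# Manually created NFS PV, plus a PVC that can bind to it.
# Server and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.10
    path: /exports/k8s/pv0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```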
This makes me believe it isn't going to automatically work though:
This tutorial assumes that dynamic volume provisioning is available. When that is not the case, persistent volume claims need to be created manually.
Maybe the thing you are really looking for is a statefulset with a PVC template
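The statefulset route looks roughly like this - `volumeClaimTemplates` stamps out one PVC per replica (e.g. `datadir-cockroachdb-0`, `-1`, `-2`). Names and sizes here are guesses, and most of the spec is omitted:

```yaml
# StatefulSet fragment showing volumeClaimTemplates.
# serviceName, selector, and the pod template are omitted.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb
spec:
  replicas: 3
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```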
@nshttpd so I can make a single NFS volume to support multiple claims?
If you are trying to do cockroach or cassandra,
You might need multiple nfs mounts?
No clue. I don't know how any of this shit works. :slightly_smiling_face:
On the other hand, setting up some manner of dynamic thingie would help when I finally deploy this to linode since that will work like that there.
I'm having a stupid problem just getting an image from a registry on my machine that's working on a coworkers machine totally fine
What the heck
❯ docker pull registry.url.123/helm:latest
Error response from daemon: mediaType in manifest should be 'application/vnd.docker.distribution.manifest.v2+json' not ''
Looking at the manifest, mediaType is specified properly for every layer
And it also works fine on another machine of mine
Look up the documentation on kubernetes and NFS, you need to define a storage class, then PVCs allocated of that type will provision PVs automatically from the NFS server
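The storage class itself is tiny - the only thing that matters is that `provisioner` matches what the provisioner deployment announces. `fuseim.pri/ifs` happens to be the nfs-client-provisioner default; `archiveOnDelete` is one of its parameters:

```yaml
# StorageClass tied to an external NFS provisioner.
# The provisioner name must match the provisioner deployment's config.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
```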
@jski that is a weird one I've never seen, do you have a really old version of docker?
Nah, 19.03, so the latest edge
authenticated to a private repo?
almost sounds like it's giving back an error and docker can't parse it.
All I can think of is, we're generating the images using
buildah instead of
docker build now, so they're building as oci-compliant images instead of... regular I guess
But it worked since that change so I don't think that's it
OCI is standard registry stuff
Yeah, I logged out/in to confirm it, same thing
have you tried to push with podman?
see if it is docker that doesn't like it?
nfs-client-provisioner looks like the ticket
and........ it requires helm
guess I'm setting that up
Oh, I may have found a way to set it up without helm
I am -1 helm personally
Setting up cert-manager recently,
And the non helm kube apply worked way better
I'm trying to learn kube so I don't need to layer too much shit on top just yet
helm is just chef for K8S
Normal ExternalProvisioning 13s (x6 over 76s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
Ok, so now what?
I thought that was supposed to be automatic?
Oh, there we go
I didn't update the rbac.yaml to point to the correct namespace
save yourself a bunch of trouble in the future and set TILLER_NAMESPACE variable in your bash config
unless you're putting it in default I guess but you probably shouldn't?
Is TILLER_NAMESPACE used by anything other than helm?
but, the namespace is hard-coded in the file, so that probably wouldn't matter?
you need the helm command to know the namespace where you actually deployed tiller
I'm not using helm
oh I saw where you were talking about it and then didn't see the next line where you said you weren't
my b :slightly_smiling_face:
it's ok. :slightly_smiling_face:
so, now, how do I use that? :slightly_smiling_face:
do I need to somehow mark it as the default storage class or whatever?
and does it being in a different namespace cause me any trouble?
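Marking the class as default is just an annotation on the StorageClass itself, something like:

```yaml
# A default StorageClass is one carrying the is-default-class annotation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
```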
Warning FailedScheduling 28s default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
so yeah, I'm missing something here
it should be in the same namespace where you are deploying cockroach.
I was hoping for a general purpose solution not bound to a particular namespace
but, I guess I can alter that expectation
then you just define the Volume in the Pod to use
I set my new storage class to default (there was no default one before)
but it's still not working
I wonder if I need to add this:
annotations:
  volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
that's from the test claim, which did work
but does that work in volumeClaimTemplates?
no, that doesn't seem to have helped
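For what it's worth, the modern way to pin a class inside `volumeClaimTemplates` is the `storageClassName` field rather than the beta annotation - a sketch, with the class name matching the one from the test claim:

```yaml
# volumeClaimTemplates fragment: storageClassName is the current
# field; the volume.beta.kubernetes.io/storage-class annotation is
# the older equivalent.
volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      storageClassName: managed-nfs-storage
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```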
I'm missing something (probably obvious)
I can get the test claim to work just fine, but the cockroachdb config won't bind to it
wonko@deepthought:~/Documents/projects/Chremoas/kubernetes$ kubectl get persistentvolumeclaims
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
datadir-cockroachdb-0   Pending                                                                                             3h38m
datadir-cockroachdb-1   Pending                                                                                             3h38m
datadir-cockroachdb-2   Pending                                                                                             3h38m
test-claim              Bound     pvc-2730e59d-76bb-4e4b-9aa2-210dd264d71c   1Mi        RWX            managed-nfs-storage   8s
Each pod needs its own pvc right?
Hmmm, yeah, out of my element here. Certainly looks like your cockroach pvcs are not using the right storage class.
Unless you set that one as default, which you said you did.
But that may have been after those pvcs were created. What if you delete them?