docs: migrate Dgraph content #98

Draft: wants to merge 54 commits into base: main

Changes from all commits (54 commits)
62f7121
initial port
ryanfoxtyler Feb 9, 2025
0751039
.
ryanfoxtyler Feb 11, 2025
2fce6c9
Update index.mdx
ryanfoxtyler Feb 13, 2025
cd78c9b
Merge branch 'main' into dgraph
ryanfoxtyler Feb 21, 2025
ce097e8
update paths
ryanfoxtyler Feb 24, 2025
0e1c84d
.
ryanfoxtyler Feb 25, 2025
17f02c7
.
ryanfoxtyler Feb 26, 2025
f15ca41
.
ryanfoxtyler Feb 26, 2025
656f129
Update trunk.yaml
ryanfoxtyler Feb 26, 2025
8b2b83d
.
ryanfoxtyler Feb 26, 2025
068ff0b
Update overview.mdx
ryanfoxtyler Feb 26, 2025
ab371c0
.
ryanfoxtyler Feb 26, 2025
e09e02c
.
ryanfoxtyler Feb 26, 2025
ce871b8
.
ryanfoxtyler Feb 26, 2025
d4143b7
Update schema.mdx
ryanfoxtyler Feb 26, 2025
0a2a1e3
.
ryanfoxtyler Feb 26, 2025
2542000
Update tips.mdx
ryanfoxtyler Feb 26, 2025
586dfc5
.
ryanfoxtyler Feb 26, 2025
3e31efa
.
ryanfoxtyler Feb 26, 2025
da89a16
Update javascript.mdx
ryanfoxtyler Feb 26, 2025
27f8560
.
ryanfoxtyler Feb 26, 2025
e5764a4
.
ryanfoxtyler Feb 27, 2025
30ae596
Update indexes.mdx
ryanfoxtyler Feb 27, 2025
923d211
Update docs.json
ryanfoxtyler Feb 27, 2025
69bd398
Update docs.json
ryanfoxtyler Feb 27, 2025
ead8ba8
.
ryanfoxtyler Feb 28, 2025
6cc8e04
.
ryanfoxtyler Feb 28, 2025
448f13f
.
ryanfoxtyler Feb 28, 2025
2048092
.
ryanfoxtyler Feb 28, 2025
6c0929c
.
ryanfoxtyler Mar 2, 2025
17e1c36
.
ryanfoxtyler Mar 2, 2025
b5ff22f
.
ryanfoxtyler Mar 2, 2025
f08f7f9
.
ryanfoxtyler Mar 2, 2025
f3469c5
.
ryanfoxtyler Mar 2, 2025
e416eba
.
ryanfoxtyler Mar 2, 2025
49dc725
.
ryanfoxtyler Mar 2, 2025
51826f6
.
ryanfoxtyler Mar 2, 2025
b2d87b2
.
ryanfoxtyler Mar 2, 2025
1d4f3e3
Update introduction.mdx
ryanfoxtyler Mar 2, 2025
8840043
Update http.mdx
ryanfoxtyler Mar 2, 2025
d210ba8
.
ryanfoxtyler Mar 2, 2025
7ca0e3b
.
ryanfoxtyler Mar 2, 2025
ad93c9b
.
ryanfoxtyler Mar 2, 2025
00dc5bc
.
ryanfoxtyler Mar 2, 2025
9d1b686
.
ryanfoxtyler Mar 2, 2025
76a9b9c
Update types-and-operations.mdx
ryanfoxtyler Mar 2, 2025
b46337f
.
ryanfoxtyler Mar 2, 2025
e69e923
.
ryanfoxtyler Mar 3, 2025
46305c0
.
ryanfoxtyler Mar 3, 2025
6a77b6b
.
ryanfoxtyler Mar 3, 2025
2536d25
.
ryanfoxtyler Mar 3, 2025
e32a02a
.
ryanfoxtyler Mar 3, 2025
0de69ce
reduce Dgraph Cloud references
ryanfoxtyler Mar 7, 2025
d01a471
.
ryanfoxtyler Mar 7, 2025
1 change: 1 addition & 0 deletions .trunk/configs/.vale.ini
Original file line number Diff line number Diff line change
@@ -14,4 +14,5 @@ BasedOnStyles = Vale, Google
Google.Exclamation = OFF
Google.Parens = OFF
Google.We = OFF
Google.Passive = OFF
CommentDelimiters = {/*, */}
12 changes: 6 additions & 6 deletions .trunk/trunk.yaml
@@ -18,20 +18,20 @@ runtimes:

lint:
enabled:
- renovate@39.166.1
- golangci-lint@1.63.4
- renovate@39.181.0
- golangci-lint@1.64.5
- [email protected]
- [email protected]
- [email protected].369
- [email protected].377
- git-diff-check
- [email protected]
- [email protected].3
- [email protected].0:
- [email protected].4
- [email protected].2:
packages:
- "@mintlify/[email protected]"
- [email protected]
- [email protected]
- [email protected].6
- [email protected].13
- [email protected]
ignore:
- linters: [ALL]
24 changes: 12 additions & 12 deletions README.md
@@ -38,8 +38,8 @@ The design and hosting of our docs site is provided by
[Mintlify](https://mintlify.com/). The vast majority of configuration is in code
in `mint.json`.

Changes will be deployed to [production](https://docs.hypermode.com)
automatically after pushing to the `main` branch.
Changes are deployed to [production](https://docs.hypermode.com) automatically
after pushing to the `main` branch.

### Development Environment Setup

@@ -49,15 +49,15 @@ The following components are useful when developing locally:

See live changes as you write and edit.

```bash
```sh
npm i -g mintlify
```

#### Trunk CLI

Format and lint changes for easy merging.

```bash
```sh
npm i -g @trunkio/launcher
```

@@ -70,7 +70,7 @@ to make it easier to build easy-to-consume documentation.
To spin up a local server, run the following command at the root of the docs
repo:

```bash
```sh
mintlify dev
```

@@ -102,14 +102,14 @@ types. It is implemented within CI/CD, but also executable locally.
Formatting should run automatically on save. To trigger a manual formatting of
the repo, run:

```bash
```sh
trunk fmt
```

To run lint checks, run:

```bash
trunk check # appending --all will run checks beyond changes on the current branch
```sh
trunk check # appending --all runs checks beyond changes on the current branch
```

Note that Trunk also has a
@@ -118,7 +118,7 @@ you can install.

However, when installing it please be aware of the `trunk.autoInit` setting,
which is `true` (enabled) by default. This controls whether to auto-initialize
trunk in non-trunk repositories - meaning _any_ folder you open with VS Code
will get configured with a `.trunk` subfolder, and will start using Trunk. You
should probably set this to `false` in your VS Code user settings, to not
interfere with other projects you may be working on.
trunk in non-trunk repositories - meaning _any_ folder you open with VS Code is
configured with a `.trunk` subfolder, and starts using Trunk. You should
probably set this to `false` in your VS Code user settings, to not interfere
with other projects you may be working on.
6 changes: 3 additions & 3 deletions badger/overview.mdx
@@ -5,11 +5,11 @@ mode: "wide"
"og:title": "Overview - Badger"
---

## What is Badger? {/* <!-- vale Google.Contractions = NO --> */}
## What is Badger? {/* vale Google.Contractions = NO */}

BadgerDB is an embeddable, persistent, and fast key-value (KV) database written
in pure Go. It's the underlying database for [Dgraph](https://dgraph.io), a
fast, distributed graph database. It's meant to be an efficient alternative to
in pure Go. It is the underlying database for [Dgraph](/dgraph), a fast,
distributed graph database. It is meant to be an efficient alternative to
non-Go-based key-value stores like RocksDB.

## Changelog
75 changes: 38 additions & 37 deletions badger/quickstart.mdx
@@ -49,7 +49,7 @@ import (

func main() {
// Open the Badger database located in the /tmp/badger directory.
// It will be created if it doesn't exist.
// It is created if it doesn't exist.
db, err := badger.Open(badger.DefaultOptions("/tmp/badger"))
if err != nil {
log.Fatal(err)
@@ -66,7 +66,7 @@ func main() {
By default, Badger ensures all data persists to disk. It also supports a pure
in-memory mode. When Badger is running in this mode, all data remains in memory
only. Reads and writes are much faster, but Badger loses all stored data in the
case of a crash or close. To open badger in in-memory mode, set the `InMemory`
case of a crash or close. To open Badger in in-memory mode, set the `InMemory`
option.

```go
@@ -185,8 +185,8 @@ The first argument to `DB.NewTransaction()` is a boolean stating if the
transaction should be writable.

Badger allows an optional callback to the `Txn.Commit()` method. Normally, the
callback can be set to `nil`, and the method will return after all the writes
have succeeded. However, if this callback is provided, the `Txn.Commit()` method
callback can be set to `nil`, and the method returns after all the writes have
succeeded. However, if this callback is provided, the `Txn.Commit()` method
returns as soon as it has checked for any conflicts. The actual writing to the
disk happens asynchronously, and the callback is invoked once the writing has
finished, or an error has occurred. This can improve the throughput of the app
@@ -288,8 +288,8 @@ for {

Badger provides support for ordered merge operations. You can define a func of
type `MergeFunc` which takes in an existing value, and a value to be _merged_
with it. It returns a new value which is the result of the _merge_ operation.
All values are specified in byte arrays. For e.g., here is a merge function
with it. It returns a new value which is the result of the merge operation. All
values are specified in byte arrays. For example, this is a merge function
(`add`) which appends a `[]byte` value to an existing `[]byte` value.

```go
@@ -354,7 +354,7 @@ m.Add(uint64ToBytes(3))
res, _ := m.Get() // res should have value 6 encoded
```

## Setting time to live (TTL) and user metadata on keys
## Setting time to live and user metadata on keys

Badger allows setting an optional Time to Live (TTL) value on keys. Once the TTL
has elapsed, the key is no longer retrievable and is eligible for garbage
@@ -458,16 +458,16 @@ db.View(func(txn *badger.Txn) error {
Considering that iteration happens in **byte-wise lexicographical sorting**
order, it's possible to create a sorting-sensitive key. For example, a simple
blog post key might look like:`feed:userUuid:timestamp:postUuid`. Here, the
`timestamp` part of the key is treated as an attribute, and items will be stored
in the corresponding order:
`timestamp` part of the key is treated as an attribute, and items are stored in
the corresponding order:

| Order ASC | Key |
| :-------: | :------------------------------------------------------------ |
| 1 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127889:tQpnEDVRoCxTFQDvyQEzdo |
| 2 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127533:1Mryrou1xoekEaxzrFiHwL |
| 3 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486:pprRrNL2WP4yfVXsSNBSx6 |
| Order Ascending | Key |
| :-------------: | :------------------------------------------------------------ |
| 1 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127889:tQpnEDVRoCxTFQDvyQEzdo |
| 2 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127533:1Mryrou1xoekEaxzrFiHwL |
| 3 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486:pprRrNL2WP4yfVXsSNBSx6 |

It's important to properly configure keys for lexicographical sorting to avoid
It is important to properly configure keys for lexicographical sorting to avoid
incorrect ordering.

A **prefix scan** through the preceding keys can be achieved using the prefix
@@ -486,7 +486,7 @@ identify where to resume.

```go
// startCursor may look like 'feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486'.
// A prefix scan with this cursor will locate the specific key where
// A prefix scan with this cursor locates the specific key where
// the previous iteration stopped.
err = db.badger.View(func(txn *badger.Txn) error {
it := txn.NewIterator(opts)
@@ -540,12 +540,13 @@ return nextCursor, err

### Key-only iteration

Badger supports a unique mode of iteration called _key-only_ iteration. It's
Badger supports a unique mode of iteration called _key-only_ iteration. It is
several orders of magnitude faster than regular iteration, because it involves
access to the LSM-tree only, which is usually resident entirely in RAM. To
enable key-only iteration, you need to set the `IteratorOptions.PrefetchValues`
field to `false`. This can also be used to do sparse reads for selected keys
during an iteration, by calling `item.Value()` only when required.
access to the log-structured merge (LSM) tree only, which is usually resident
entirely in RAM. To enable key-only iteration, set the
`IteratorOptions.PrefetchValues` field to `false`. This can also be used to do
sparse reads for selected keys during an iteration, by calling `item.Value()`
only when required.

```go
err := db.View(func(txn *badger.Txn) error {
@@ -570,16 +571,16 @@ serially to be sent over network, written to disk, or even written back to
Badger. This is a much faster way to iterate over Badger than using a single
Iterator. Stream supports Badger in both managed and normal mode.
Stream uses the natural boundaries created by SSTables within the LSM tree, to
quickly generate key ranges. Each goroutine then picks a range and runs an
iterator to iterate over it. Each iterator iterates over all versions of values
and is created from the same transaction, thus working over a snapshot of the
DB. Every time a new key is encountered, it calls `ChooseKey(item)`, followed by
`KeyToList(key, itr)`. This allows a user to select or reject that key, and if
selected, convert the value versions into custom key-values. The goroutine
batches up 4 MB worth of key-values, before sending it over to a channel.
Another goroutine further batches up data from this channel using _smart
batching_ algorithm and calls `Send` serially.
Stream uses the natural boundaries created by SSTables within the log-structured
merge (LSM) tree to quickly generate key ranges. Each goroutine then picks a
range and runs an iterator to iterate over it. Each iterator iterates over all
versions of values and is created from the same transaction, thus working over a
snapshot of the DB. Every time a new key is encountered, it calls
`ChooseKey(item)`, followed by `KeyToList(key, itr)`. This allows a user to
select or reject that key, and if selected, convert the value versions into
custom key-values. The goroutine batches up 4 MB worth of key-values before
sending it over to a channel. Another goroutine further batches up data from
this channel using a _smart batching_ algorithm and calls `Send` serially.

This framework is designed for high throughput key-value iteration, spreading
the work of iteration across many goroutines. `DB.Backup` uses this framework to
@@ -624,9 +625,9 @@ if err := stream.Orchestrate(context.Background()); err != nil {

Badger values need to be garbage collected for two reasons:

- Badger keeps values separately from the LSM tree. This means that the
compaction operations that clean up the LSM tree do not touch the values at
all. Values need to be cleaned up separately.
- Badger keeps values separately from the log-structured merge (LSM) tree. This
  means that the compaction operations that clean up the LSM tree do not touch
  the values at all. Values need to be cleaned up separately.

- Concurrent read/write transactions could leave behind multiple values for a
single key, because they're stored with different versions. These could
@@ -639,9 +640,9 @@ appropriate time:

- `DB.RunValueLogGC()`: This method is designed to do garbage collection while
Badger is online. Along with randomly picking a file, it uses statistics
generated by the LSM-tree compactions to pick files that are likely to lead to
maximum space reclamation. It's recommended to be called during periods of low
activity in your system, or periodically. One call would only result in
generated by the LSM tree compactions to pick files that are likely to lead to
maximum space reclamation. It is recommended to be called during periods of
low activity in your system, or periodically. One call would only result in
removal of at max one log file. As an optimization, you could also immediately
re-run it whenever it returns nil error (indicating a successful value log
GC), as shown below.
28 changes: 14 additions & 14 deletions badger/troubleshooting.mdx
@@ -59,7 +59,7 @@

If you're using Badger with `SyncWrites=false`, then your writes might not be
written to value log and won't get synced to disk immediately. Writes to LSM
tree are done inmemory first, before they get compacted to disk. The compaction
tree are done in-memory first, before they get compacted to disk. The compaction
would only happen once `BaseTableSize` has been reached. So, if you're doing a
few writes and then checking, you might not see anything on disk. Once you
`Close` the database, you'll see these writes on disk.
Expand Down Expand Up @@ -87,10 +87,10 @@
panic: send on closed channel
```

If you're seeing panics like above, this would be because you're operating on a
closed DB. This can happen, if you call `Close()` before sending a write, or
multiple times. You should ensure that you only call `Close()` once, and all
your read/write operations finish before closing.
If you're seeing panics like this, it is because you're operating on a closed
DB. This can happen if you call `Close()` before sending a write, or if you
call it multiple times. You should ensure that you only call `Close()` once,
and that all your read/write operations finish before closing.

## Are there any Go specific settings that I should use?

@@ -109,42 +109,42 @@

## I see "manifest has unsupported version: X (we support Y)" error

This error means you have a badger directory which was created by an older
version of badger and you're trying to open in a newer version of badger. The
underlying data format can change across badger versions and users have to
migrate their data directory. Badger data can be migrated from version X of
badger to version Y of badger by following the steps listed below. Assume you
were on badger v1.6.0 and you wish to migrate to v2.0.0 version.

1. Install badger version v1.6.0
1. Install Badger version v1.6.0

- `cd $GOPATH/src/github.com/dgraph-io/badger`
- `git checkout v1.6.0`
- `cd badger && go install`

This should install the old badger binary in your `$GOBIN`.
This should install the old Badger binary in your `$GOBIN`.

2. Create Backup
- `badger backup --dir path/to/badger/directory -f badger.backup`
3. Install badger version v2.0.0
3. Install Badger version v2.0.0

- `cd $GOPATH/src/github.com/dgraph-io/badger`
- `git checkout v2.0.0`
- `cd badger && go install`

This should install the new badger binary in your `$GOBIN`.
This should install the new Badger binary in your `$GOBIN`.

4. Restore data from backup

- `badger restore --dir path/to/new/badger/directory -f badger.backup`

This creates a new directory on `path/to/new/badger/directory` and add
badger data in newer format to it.
This creates a new directory on `path/to/new/badger/directory` and adds
data in the new format to it.

NOTE - The preceding steps shouldn't cause any data loss but please ensure the
new data is valid before deleting the old badger directory.
new data is valid before deleting the old Badger directory.

## Why do I need gcc to build badger? Does badger need Cgo?

Badger doesn't directly use Cgo but it relies on https://github.com/DataDog/zstd
library for zstd compression and the library requires
@@ -162,6 +162,6 @@
<Note>
Yes they're compatible both ways. The only exception is 0 bytes of input which
gives 0 bytes output with the Go zstd. But you already have the
zstd.WithZeroFrames(true) which will wrap 0 bytes in a header so it can be fed
to DD zstd. This is only relevant when downgrading.
zstd.WithZeroFrames(true) which wraps 0 bytes in a header so it can be fed to
DD zstd. This is only relevant when downgrading.
</Note>
2 changes: 1 addition & 1 deletion create-project.mdx
@@ -25,7 +25,7 @@ creating your first Modus app, visit the [Modus quickstart](modus/quickstart).
Next, initialize the app with Hypermode through the [Hyp CLI](/hyp-cli) and link
your GitHub repo with your Modus app to Hypermode using:

```bash
```sh
hyp link
```

2 changes: 1 addition & 1 deletion deploy.mdx
@@ -16,7 +16,7 @@ in the [app manifest](/modus/app-manifest).
After you push your Modus app to GitHub, you can link your Hypermode project to
the repo through the Hyp CLI.

```bash
```sh
hyp link
```
