mirror of https://github.com/jbranchaud/til synced 2026-01-14 04:28:02 +00:00

Compare commits


2 Commits

Author SHA1 Message Date
nick-w-nick
5f6d236be4 Merge 295fe153ad into df3492d4ef 2024-08-22 12:30:44 -04:00
nick-w-nick
295fe153ad added mention of ES6 compatibility
Hello, I've added a small blockquote below the description to indicate that this method of accessing an indefinite number of function arguments has been superseded by the use of the spread operator via rest parameters for ES6+ compatibility.
2022-01-06 11:39:04 -05:00
289 changed files with 20 additions and 10983 deletions

5
.gitmodules vendored

@@ -1,5 +0,0 @@
[submodule "notes"]
path = notes
url = git@github.com:jbranchaud/til-notes-private.git
branch = main
ignore = all

12
.vimrc

@@ -9,15 +9,3 @@ function! CountTILs()
endfunction
nnoremap <leader>c :call CountTILs()<cr>
augroup DisableMarkdownFormattingForTILReadme
autocmd!
autocmd BufRead ~/code/til/README.md autocmd! Format
augroup END
" local til_readme_group = vim.api.nvim_create_augroup('DisableMarkdownFormattingForTILReadme', { clear = true })
" vim.api.nvim_create_autocmd('BufRead', {
" command = 'autocmd! Format',
" group = til_readme_group,
" pattern = vim.fn.expand '~/code/til/README.md',
" })

354
README.md

File diff suppressed because it is too large


@@ -1,79 +0,0 @@
version: '3'
vars:
NOTES_DIR: notes
NOTES_FILE: '{{.NOTES_DIR}}/NOTES.md'
EDITOR: '{{.EDITOR | default "nvim"}}'
tasks:
default:
desc: Show available commands
cmds:
- task --list
notes:
desc: Interactive picker for notes tasks
cmds:
- |
TASK=$(task --list | grep "^\* notes:" | sed 's/^\* notes://' | sed 's/\s\+/ - /' | fzf --prompt="Select notes task: " --height=40% --reverse) || true
if [ -n "$TASK" ]; then
TASK_NAME=$(echo "$TASK" | awk '{print $1}' | sed 's/:$//')
task notes:$TASK_NAME
fi
interactive: true
silent: true
notes:edit:
desc: All-in-one edit, commit, and push notes
cmds:
- task notes:open
- task notes:push
notes:sync:
desc: Sync latest changes from the notes submodule
cmds:
- git submodule update --remote {{.NOTES_DIR}}
- cd {{.NOTES_DIR}} && git checkout main
silent: false
notes:open:
desc: Opens NOTES.md (syncs latest changes first) in default editor
deps: [notes:sync]
cmds:
- $EDITOR {{.NOTES_FILE}}
interactive: true
notes:push:
desc: Commit and push changes to notes submodule
dir: '{{.NOTES_DIR}}'
cmds:
- git add NOTES.md
- git commit -m "Update notes - $(date '+%Y-%m-%d %H:%M')"
- git pull --rebase
- git push
status:
- git diff --exit-code NOTES.md
silent: false
notes:status:
desc: Check status of notes submodule
dir: '{{.NOTES_DIR}}'
cmds:
- git status
notes:pull:
desc: Pull latest changes (alias for sync)
cmds:
- task notes:sync
notes:diff:
desc: Show uncommitted changes in notes
dir: '{{.NOTES_DIR}}'
cmds:
- git diff NOTES.md
notes:log:
desc: Show recent commit history for notes
dir: '{{.NOTES_DIR}}'
cmds:
- git log --oneline -10
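
For a sense of how this gets driven day to day, here is a quick sketch (assuming [Task](https://taskfile.dev) is installed as `task` and run from the repo root):

```bash
# list the tasks defined above
$ task --list

# sync, open, then commit-and-push notes in one go
$ task notes:edit
```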


@@ -1,56 +0,0 @@
# Generate Types For A Content Collection
Let's say I'm using Astro to publish posts via markdown. One of the best ways
to do that is as a _Content Collection_. The posts will live in `src/content`
probably under a `posts` directory. Plus a config file will define the
collection and specify validations for the frontmatter.
```typescript
// src/content/config.ts
import { defineCollection, z } from 'astro:content';
const postsCollection = defineCollection({
schema: z.object({
title: z.string(),
description: z.string(),
tags: z.array(z.string())
})
});
export const collections = {
'posts': postsCollection,
};
```
When I first add this to my project and get the collection, it won't know what
the types are.
```astro
---
import { getCollection } from "astro:content";
export async function getStaticPaths() {
const blogEntries = await getCollection("posts");
// ^^^ any
return blogEntries.map((entry) => ({
params: { slug: entry.slug },
props: { entry },
}));
}
---
```
I can tell Astro to generate a fresh set of types for things like content
collections by running the [`astro sync`
command](https://docs.astro.build/en/reference/cli-reference/#astro-sync).
```bash
$ npm run astro sync
```
This updates auto-generated files under the `.astro` directory which get pulled
in to your project's `env.d.ts` file.
All of these types will also be synced anytime I run `astro dev`, `astro
build`, or `astro check`.


@@ -1,53 +0,0 @@
# Markdown Files Are Of Type MarkdownInstance
One of the things Astro excels at is rendering markdown files as HTML pages in
your site. And at some point we'll want to access a listing of those markdown
files in order to do something like display a list of them on an index page.
For that, we'll use
[`Astro.glob()`](https://docs.astro.build/en/reference/api-reference/#astroglob).
```astro
---
const allPosts = await Astro.glob("../posts/*.md");
---
<ul>
{allPosts.map(post => {
return <Post title={post.frontmatter.title} slug={post.frontmatter.slug} />
})}
</ul>
```
This looks great, but we'll run into a type error on that first line:
`'allPosts' implicitly has type 'any'`. We need to declare the type
of these post instances that are being read in by Astro.
These are of [type
`MarkdownInstance`](https://docs.astro.build/en/reference/api-reference/#markdown-files).
That's a generic though, so we need to tell it a bit more about the shape of a
post.
```typescript
import type { MarkdownInstance } from "astro";
export type BarePost = {
layout: string;
title: string;
slug: string;
tags: string[];
};
export type Post = MarkdownInstance<BarePost>;
```
We can then update that first line:
```typescript
const allPosts: Post[] = await Astro.glob("../posts/*.md");
```
Alternatively, you can specify the generic on `glob`:
```typescript
const allPosts = await Astro.glob<BarePost>("../posts/*.md");
```


@@ -1,30 +0,0 @@
# AWS CLI Requires Groff Executable
I have the AWS CLI installed on this machine, but when I went to run certain
commands like `aws logs tail my_log_group` or even `aws logs tail help`, I'd
get the following error:
```
$ aws logs tail help
Could not find executable named 'groff'
```
This may only be an issue on macOS Ventura for older versions of the CLI, per
[this PR](https://github.com/aws/aws-cli/pull/7413):
> The CLI's help commands are currently broken on macOS Ventura because Ventura has replaced groff with mandoc. This PR fixes the issue by falling back on mandoc if groff doesn't exist in the path.
There are two ways of dealing with this. One would be to install the missing
dependency, [`groff`](https://www.gnu.org/software/groff/):
```bash
$ brew install groff
```
The other is to update the AWS CLI to one that falls back to `mandoc`.
Depending on how you originally installed the AWS CLI, you can either [follow
their official install/upgrade
instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html),
`pip install --upgrade awscli`, or upgrade via Homebrew (`brew upgrade
awscli`).
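
With a Homebrew-managed install, for instance, that upgrade might look like:

```bash
$ brew upgrade awscli
$ aws --version   # confirm the CLI is now a newer build with the mandoc fallback
```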


@@ -1,46 +0,0 @@
# Find And Follow Server Logs
Let's say you are authenticated with the AWS CLI and have the appropriate
CloudWatch permissions. You have a few services running in production with
associated logs. One of those is a Rails server.
We want to run `aws logs tail`, but first we check how that command works.
```bash
$ aws logs tail help
```
We see a bunch of options, but the only required one is `group_name` ("The name
of the CloudWatch Logs group."). We may also notice the `--follow` flag which
we'll want to use as well to keep incoming logs flowing.
We need to determine the log group name for the Rails server. We can do that
from the CLI as well (no need to dig into the web UI).
```bash
$ aws logs describe-log-groups
{
"logGroups": [
{
"logGroupName": "/aws/codebuild/fc-rails-app-abcefg-123456",
"creationTime": 1739476650823,
"metricFilterCount": 0,
"arn": "arn:aws:logs:us-east-2:123456789:log-group:/aws/codebuild/fc-rails-app-abcefg-123456:*",
"storedBytes": 65617,
"logGroupClass": "STANDARD",
"logGroupArn": "arn:aws:logs:us-east-2:123456789:log-group:/aws/codebuild/fc-rails-app-abcefg-123456"
},
...
]
}
```
Because the group name is descriptive enough, we can find the log group we are
interested in: `/aws/codebuild/fc-rails-app-abcefg-123456`.
Now we know what we want to `tail`.
```bash
$ aws logs tail /aws/codebuild/fc-rails-app-abcefg-123456 --follow
```


@@ -1,29 +0,0 @@
# List RDS Snapshots With Matching Identifier Prefix
I'm working on a script that manually creates a snapshot which it will then
restore to a temporary database that I can scrub and dump. The snapshots that
this script takes are _manual_ and they are named with identifiers that have a
defining prefix (`dev-snapshot-`). Besides the few snapshots created by this
script, there are tons of automated snapshots that RDS creates for
backup/recovery purposes.
I want to list any snapshots that have been created by the script. I can do
this with the `describe-db-snapshots` command and some filters.
```bash
$ aws rds describe-db-snapshots \
--snapshot-type manual \
--query "DBSnapshots[?starts_with(DBSnapshotIdentifier, 'dev-snapshot-')].DBSnapshotIdentifier" \
--no-cli-pager
[
"dev-snapshot-20250327-155355"
]
```
There are two key pieces. The `--snapshot-type manual` filter excludes all
those automated snapshots. The `--query` both filters to any snapshots whose
identifier `?starts_with` the prefix `dev-snapshot-` and then refines the
output to just the `DBSnapshotIdentifier` instead of the entire JSON object.
[source](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-snapshots.html)


@@ -1,49 +0,0 @@
# Output CLI Results In Different Formats
The AWS CLI can output the results of commands in three different formats.
- Text
- JSON
- Table
The _default_ output format for my AWS CLI is currently configured to `json`.
```bash
$ aws configure get output
json
```
I can either accept the default or I can override it with the `--output` flag.
```bash
$ aws rds describe-db-instances \
--query 'DBInstances[*].Endpoint' \
--no-cli-pager
[
{
"Address": "fc-database-abcefg-ab1c23de.asdfgh4zxcvb.us-east-2.rds.amazonaws.com",
"Port": 5432,
"HostedZoneId": "A1BCDE2FG345H6"
}
]
$ aws rds describe-db-instances \
--query 'DBInstances[*].Endpoint' \
--no-cli-pager \
--output table
----------------------------------------------------------------------------------------------------
| DescribeDBInstances |
+-----------------------------------------------------------------------+-----------------+--------+
| Address | HostedZoneId | Port |
+-----------------------------------------------------------------------+-----------------+--------+
| fc-database-abcefg-ab1c23de.asdfgh4zxcvb.us-east-2.rds.amazonaws.com | A1BCDE2FG345H6 | 5432 |
+-----------------------------------------------------------------------+-----------------+--------+
$ aws rds describe-db-instances \
--query 'DBInstances[*].Endpoint' \
--no-cli-pager \
--output text
fc-database-abcefg-ab1c23de.asdfgh4zxcvb.us-east-2.rds.amazonaws.com A1BCDE2FG345H6 5432
```
[source](https://docs.aws.amazon.com/cli/v1/userguide/cli-usage-output-format.html)


@@ -1,50 +0,0 @@
# SSH Into An ECS Container
In [Connect To Production Rails Console on AWS /
Flightcontrol](https://www.visualmode.dev/connect-to-production-rails-console-aws-flightcontrol),
I went into full detail about how to access `rails console` for a production
Rails app running in an ECS container.
A big part of that process was establishing an SSH connection to the ECS container.
To do that, I need to know my region, container ID, and task ID. I can get the
first two by listing my clusters and finding the cluster/container that houses
the Rails app.
```bash
$ aws ecs list-clusters
{
"clusterArns": [
"arn:aws:ecs:us-east-2:123:cluster/rails-app-abc123"
]
}
```
The region then is `us-east-2` and the container ID is `rails-app-abc123`.
I can use that to find the task ID:
```bash
$ aws ecs list-tasks --region us-east-2 --cluster rails-app-abc123
{
"taskArns": [
"arn:aws:ecs:us-east-2:123:task/rails-app-abc123/8526b3191d103bb1ff90c65a655ad004"
]
}
```
The task ID is the final portion of the ARN:
`8526b3191d103bb1ff90c65a655ad004`.
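If I were scripting this, I could pull the task ID out of the ARN in one go (a sketch reusing the example values above):

```bash
$ aws ecs list-tasks --region us-east-2 --cluster rails-app-abc123 \
    --query 'taskArns[0]' --output text | awk -F/ '{print $NF}'
8526b3191d103bb1ff90c65a655ad004
```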
Putting this all together, I can SSH into the ECS container and get a bash
shell like so:
```bash
$ aws ecs execute-command \
--region us-east-2 \
--cluster rails-app-abc123 \
--container rails-app-abc123 \
--task 8526b3191d103bb1ff90c65a655ad004 \
--interactive \
--command "/bin/bash"
```


@@ -1,38 +0,0 @@
# Turn Off Output Pager For A Command
It is not uncommon for an AWS CLI command to return a ton of output. When that
happens, it is nice that the results end up in a pager program (like `less`)
where you can search and review them, copy a value of interest, and then exit.
The pager prevents that wall of output from cluttering your terminal history.
However, sometimes I am running a command that I know is going to return a
small result. I'd rather have the results go to stdout where I can see them in
the terminal history rather than to an ephemeral pager.
For that situation I can tack on the `--no-cli-pager` flag.
```bash
$ aws rds describe-db-instances \
--query 'DBInstances[*].EngineVersion' \
--output json \
--no-cli-pager
[
"13.15",
"16.8"
]
```
Here I've asked the AWS CLI to tell me the engine versions of all my RDS
Postgres databases. Because I know the results will only include a couple
entries for my two DBs, I'd like to skip the pager —
`--no-cli-pager`.
Though I think it is better to do this on a case-by-case basis, it is also
possible to turn off the pager via the CLI configuration file.
```bash
$ aws configure set cli_pager ""
```
[source](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html#cli-usage-pagination-clientside)


@@ -1,37 +0,0 @@
# Use Specific AWS Profile With CLI
I have multiple AWS profiles authenticated with the AWS CLI. For some projects
I need to use the `default` one and for others I need to use the other.
First, I can list the available profiles like so:
```bash
$ aws configure list-profiles
default
dev-my-app
```
For one-off commands I can specify the profile for any AWS CLI command using
the `--profile` flag.
```bash
$ aws ecs list-clusters --profile dev-my-app
```
However, I don't want to have to specify that flag every time when I'm working
on a specific project. Instead I can specify the profile with an environment
variable. The [`direnv`](https://direnv.net/) tool is a great way to do this on
a per-project / per-directory basis.
I can create or update the `.envrc` file (assuming I have `direnv` installed)
adding the following line (and re-allowing the changed file):
```
# .envrc
export AWS_PROFILE=dev-my-app
```
Now, any AWS command I issue from that directory or its subdirectories will use
that profile by default.
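For example, after saving `.envrc`, I re-allow it and can confirm the CLI sees the profile:

```bash
$ direnv allow
$ aws configure list   # the profile row should now read dev-my-app (sourced from the env)
```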
[source](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html#cli-configure-files-using-profiles)


@@ -1,40 +0,0 @@
# Clean Up Your Brew Installations
Over time as you upgrade brew-installed programs and make changes to your
`Brewfile`, your machine will have artifacts left behind that you no longer
need.
Periodically, it is good to clean things up.
First, you can get a summary of stale and outdated files that brew has
installed. Use the `--dry-run` flag.
```bash
$ brew cleanup --dry-run
```
If you feel good about what you see in the output, then give things a clean.
```bash
$ brew cleanup
```
Second, if you are using a `Brewfile` to manage what `brew` installs, then you
can instruct `brew` to uninstall any dependencies that aren't specified in that
file.
By default it operates as a dry run and the `--force` flag will be needed to
actually do the cleanup. And specify the filename if it doesn't match the
default of `Brewfile`.
```bash
$ brew bundle cleanup --file=Brewfile.personal
```
If the output looks good, then force the cleanup:
```bash
$ brew bundle cleanup --force --file=Brewfile.personal
```
See `brew cleanup --help` and `brew bundle --help` for more details.


@@ -1,48 +0,0 @@
# Export List Of Everything Installed By Brew
If you're on a Mac using Homebrew to install various tools and utilities, there
may come a time when you want a listing of what is installed.
Run this command:
```bash
$ brew bundle dump
```
It may take 10 or so seconds. When it is done, you'll have a `Brewfile` in your
current directory.
Open it up and you'll see a bunch of lines like the following:
```
tap "heroku/brew"
tap "homebrew/bundle"
tap "homebrew/services"
tap "mongodb/brew"
tap "planetscale/tap"
tap "stripe/stripe-cli"
brew "asdf"
brew "bat"
brew "direnv"
brew "entr"
brew "exa"
brew "fd"
brew "ffmpeg"
brew "fx"
brew "fzf"
brew "gcc"
brew "gh"
brew "planetscale/tap/pscale"
brew "stripe/stripe-cli/stripe"
cask "1password-cli"
vscode "ms-playwright.playwright"
vscode "ms-vsliveshare.vsliveshare"
vscode "prisma.prisma"
```
Notice there are `tap`, `brew`, `cask`, and even `vscode` directives.
This is a file you could export and then run on a 'new' machine to install all
the programs you're used to having available on your current machine.
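On that new machine, the restore step is a single command (assuming the `Brewfile` has been copied into the current directory):

```bash
$ brew bundle install --file=Brewfile
```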
[source](https://danmunoz.com/setting-up-a-new-computer-with-homebrew/)


@@ -1,27 +0,0 @@
# Install Go Packages In Brewfile
Typically my `Brewfile` is only full of `brew` and `cask` directives. That's
starting to change now that `brew` supports installing Go packages listed in the
`Brewfile`.
Use the `go` directive and the URL to the hosted Go package.
Here is an example of a `Brewfile` that includes a `cask`, `brew`, and `go`
directive.
```
# screen resolution tool
cask "betterdisplay"
# Mac keychain management, gpg key
brew "pinentry-mac"
# Sanitized production Postgres dumps
go "github.com/jackc/pg_partialcopy"
```
I've recently added the exact package from above to my [`dotfiles`
repo](https://github.com/jbranchaud/dotfiles/commit/e83e9d19504f0e2f95eba33123f907f999bf865e).
Here is the [PR to `brew`](https://github.com/Homebrew/brew/pull/20798) where
this functionality was added back in October of 2025.


@@ -1,14 +0,0 @@
# Open Current Tab In New Window With Vimium
Sometimes I have a busy Chrome window going with a bunch of tabs open for
various lines of work as well as a number of tabs that I've neglected to close.
I then open a new tab, find something useful, and realize I'm at a "branching
point". I'm about to start in on a specific chunk of work that will probably
involve opening several more tabs and switching back and forth between some
dashboards. I want to start all of this from a clean slate -- or at least from
a fresh Chrome window.
With [Vimium](https://github.com/philc/vimium), I can hit `W` (`Shift-w`) to
have the current tab move from the current window to a new window. The original
window, minus that one tab, will be left as is so that I can go back to it as
needed.


@@ -1,22 +0,0 @@
# Search Tabs With The Vimium Vomnibar
If you use Chrome like I do, then you eventually end up with several windows
with dozens if not 100+ tabs open. With that many tabs, it can get tedious to
find and navigate to a given one. Someone might suggest closing a few
dozen tabs as a solution to this predicament. However, Vimium offers a solution
that doesn't require I [_kill my
darlings_](https://en.wiktionary.org/wiki/kill_one%27s_darlings).
The Vomnibar, a Vimium-powered search bar, can be summoned with `T` to only
search through open tabs.
When I hit `T`, I see a text area (for refining the search) and then a bunch of
entries populate below that which I immediately recognize as many of those tabs
that I'm going to get back to one of these days.
To narrow down to the specific thing I'm looking for, I type something into the
input. Then I arrow to the result I'm looking for and hit enter. And I'm
transported to that tab.
If I don't like where I ended up, I can also go back to the tab I had been on
with `^`.


@@ -1,18 +0,0 @@
# Monitor Usage Limits From CLI
When I first started using Claude Code enough to push the usage limits, I would
periodically switch over to the browser to check
`https://claude.ai/settings/usage` to see how close I was getting. That page
would tell me what percentage of my allotted usage I had consumed so far for the
current 5-hour session and then how long until that 5-hour usage window resets.
This can also be viewed directly in Claude Code for the CLI.
First, run the `/status` slash command and then _tab_ over to the _Usage_
section. There you will see the same details as in the web view.
I also learned, as I write this, that you can go directly to the _Usage_
section by typing the `/usage` slash command.
See [the docs](https://code.claude.com/docs/en/slash-commands) for a listing of
all slash commands.


@@ -1,15 +0,0 @@
# Open Current Prompt In Default Editor
[Claude Code](https://www.claude.com/product/claude-code) gives you a single
line to write a prompt. You can write and write as much as you want, but it will
all be on that single line. And avoid accidentally hitting 'Enter' before you're
done.
I found myself wanting to space out my thoughts, create a code block as part of
a prompt, and generally have a scratch pad instead of just a text box. By
hitting `ctrl-g`, I can move the current prompt into my default editor (in my
case, `nvim`). From there I can continue to write, edit, and format with all the
affordances of an editor.
Once I'm done crafting the prompt, I can save (e.g. `:wq`) and Claude Code will
be primed with that text. I can then hit 'Enter' to let `claude` do its thing.
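Which editor opens appears to be picked up from the environment, so something like this in your shell config should control it (an assumption on my part that Claude Code respects `$EDITOR` like most CLIs):

```bash
# ~/.zshrc
# assumption: Claude Code falls back to $EDITOR like most CLI tools
export EDITOR=nvim
```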


@@ -1,51 +0,0 @@
# Add Line Numbers To A Code Block With Counter
The
[`counter`](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_counter_styles/Using_CSS_counters)
feature in CSS is a stateful mechanism that allows you to increment and display a
number based on elements' locations in the document. This feature is useful for
adding numbers to headings and lists, but it can also be used to add line
numbers to a code block.
We need to initialize the counter to start using it. This will give it a name
and default it to the value 0. We'll tie this to a `pre` tag which wraps our
lines of code.
```css {{ title: 'globals.css' }}
pre.shiki {
counter-reset: line-number;
}
```
Then we need to increment the counter for every line of code that appears in
the code block.
```css {{ title: 'globals.css' }}
pre.shiki .line {
counter-increment: line-number;
}
```
Last, we need to display these incrementing `line-number` values _before_ each
line.
```css {{ title: 'globals.css' }}
pre.shiki .line:not(:last-of-type)::before {
content: counter(line-number);
/*
* plus any styling and spacing of the numbers
*/
}
```
This essentially attaches an element to the front (`::before`) of the line
whose content is the current value of `line-number`. It is applied to all but
the last `.line` because [shiki](https://shiki.matsu.io/) includes an empty
`.line` at the end.
Here is [the real-world example of
this](https://github.com/pingdotgg/uploadthing/blob/4954c9956c141a25a5405991c34cc5ce8d990085/docs/src/styles/tailwind.css#L13-L37)
that I referenced for this post.
Note: the counter can be incremented, decremented, or even explicitly set to a
specific value.


@@ -1,29 +0,0 @@
# Filter Blur Requires Expensive Calculation
I had [a
page](https://www.visualmode.dev/connect-to-production-rails-console-aws-flightcontrol)
on my blog that was experiencing some odd rendering behavior. The issue was
manifesting in a couple of ways.
- Resizing and scrolling were janky, causing entire page layers to re-render
and the page to flash in and out.
- Sometimes entire layer chunks would fail to paint leaving a white block
missing from the page.
The issue was occurring with and without JavaScript turned on for a
statically-built page. I suspected that some aspect of the CSS was at fault.
I was going back and forth with Dillon Hafer about what the issue could be and
he wondered, "could it be the backdrop-blur class from tailwind?". I tried
removing that class and the responsiveness of the page immediately improved.
The [`filter:
blur`](https://developer.mozilla.org/en-US/docs/Web/CSS/filter-function/blur)
and [`backdrop-filter:
blur`](https://developer.mozilla.org/en-US/docs/Web/CSS/backdrop-filter) both
use an expensive [Gaussian blur](https://en.wikipedia.org/wiki/Gaussian_blur)
calculation. One of these on a modern machine and browser probably won't have a
noticeable impact. However, a bunch of them, as in the case of my page with a
recurring component, can have quite the performance hit.
[source](https://github.com/tailwindlabs/tailwindcss/issues/15256)


@@ -1,29 +0,0 @@
# Prevent Invisible Elements From Being Clicked
I have a nav element that when clicked reveals a custom drop-down menu. It
reveals it using CSS transitions and transformations (`opacity` and `scale`).
When the nav element is clicked again, the reverse of these transformations is
applied to "hide" the menu. This gives a nice visual effect.
It only makes the menu invisible and doesn't actually make it go away. That
means the menu could be invisible yet still sitting over the top of a button
on the screen. The button cannot be clicked now because the menu is intercepting
that [_pointer
event_](https://developer.mozilla.org/en-US/docs/Web/CSS/pointer-events).
The fix is to apply CSS (or a class) when the drop-down menu is closed that
tells it to ignore _pointer events_.
```css
.pointer-events-none {
pointer-events: none;
}
```
This is more or less what [the `pointer-events-none` TailwindCSS
utility](https://tailwindcss.com/docs/pointer-events) looks like.
This class is applied by default to the drop-down menu. Then when the nav item
is clicked, some JavaScript removes that class at the same moment that the menu
is visually appearing. When a menu item is selected or the menu otherwise
closed, it transitions away and the `pointer-events-none` class is reapplied.


@@ -1,32 +0,0 @@
# Allow Cursor To Be Launched From CLI
It is nice to be able to open Cursor for a specific project directly from the
terminal like so:
```bash
$ cd ~/dev/my/project
$ cursor .
```
For the `cursor` launcher binary to be available like that, we have to find it
and add it to the path.
It is probably located in the `/Applications` folder and within that nested down
a couple directories is a `bin` directory that contains the binary we're looking
for.
```bash
ls /Applications/Cursor.app/Contents/Resources/app/bin
 bin/
├──  code*
├──  cursor*
└──  cursor-tunnel*
```
The `cursor` binary is what we want, so let's add that to our path. In my case,
I'll add this to my `~/.zshrc` file.
```bash
export PATH="/Applications/Cursor.app/Contents/Resources/app/bin:$PATH"
```


@@ -1,28 +0,0 @@
# Default Rails Deploy Script On Hatchbox
I deployed a Rails app to [Hatchbox](https://hatchbox.io) recently. When
following along in the log during a deploy, I can see most of what is happening
as part of the deploy, though it is too verbose to look through every line. I'd
rather see the contents of the deploy script.
I did quite a bit of digging around while SSH'd into my hatchbox server, but I
couldn't find if or where that file might be stored.
Instead, there is a [_Help Center_
article](https://hatchbox.relationkit.io/articles/55-what-is-the-default-rails-deploy-script)
where Chris Oliver shares what is in the script.
```bash
bundle install -j $(nproc)
yarn install
bundle exec rails assets:precompile
[[ -n "${CRON}" ]] && bundle exec rails db:migrate
```
It does a parallelized `bundle install`, then a `yarn install` (make sure your
project is using `yarn.lock`), Rails asset precompilation, and then, if `CRON`
is set (the Cron role is enabled by checking _Cron_ under _Server
Responsibilities_ for your Hatchbox server), it will run Rails migrations.
From app settings, the deploy script can be overridden, or pre- and post-deploy
steps can be added.


@@ -1,44 +0,0 @@
# Hatchbox Exports Env Vars With asdf
When you add env vars through the [Hatchbox](https://hatchbox.io/) UI, they get
exported to the environment of the asdf-shimmed processes. This is handled by
the [`asdf-vars` plugin](https://github.com/excid3/asdf-vars). That plugin
looks for `.asdf-vars` in the current chain of directories.
I can see there are many `.asdf-vars` files:
```bash
$ find . -name ".asdf-vars" -type f
./.asdf-vars
./my-app/.asdf-vars
./my-app/releases/20250120195106/.asdf-vars
./my-app/releases/20250121041054/.asdf-vars
```
And it is the one in my app's directory that contains the env vars that I set
in the UI.
```bash
$ cat my-app/.asdf-vars
BUNDLE_WITHOUT=development:test
DATABASE_URL=postgresql://user_123:123456789012345@10.0.1.1/my_app_db
PORT=9000
RACK_ENV=production
RAILS_ENV=production
RAILS_LOG_TO_STDOUT=true
RAILS_MASTER_KEY=abc123
SECRET_KEY_BASE=abc123efg456
```
When I run a shimmed process like `ruby`, those env vars are loaded into the
process's environment.
```bash
$ cd my-app/current
$ which ruby
/home/deploy/.asdf/shims/ruby
$ ruby -e "puts ENV['DATABASE_URL']"
postgresql://user_123:123456789012345@10.0.1.1/my_app_db
```
[source](https://www.visualmode.dev/hatchbox-manages-env-vars-with-asdf)


@@ -1,24 +0,0 @@
# Set Up Domain For Hatchbox Rails App
When we deploy a Rails app with [Hatchbox](https://hatchbox.io), we are given
an internal URL for publicly accessing our app. It is something like
`https://123abc.hatchboxapp.com`. That's useful as we are getting things up and
running, but eventually we want to point our own domain at the app.
The first step is to tell Hatchbox what domain we are going to use.
From our app's _Domain & SSL_ page we can enter a domain into the _Add A
Domain_ input. For instance, I have the
[visualmode.dev](https://visualmode.dev) domain and I want the
[still.visualmode.dev](https://still.visualmode.dev) subdomain pointing at my
Rails app. I submit the full name `still.visualmode.dev` and I get an _A
Record_ IPv4 address (e.g. `23.12.234.82`).
The second step is to configure a DNS record with our domain registrar.
From the DNS settings of our registrar (e.g. Cloudflare) we can add an _A
Record_ where we specify the name (e.g. `still`) and then include the IPv4
address provided by Hatchbox. We can save this and wait a minute for it to
propagate.
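One way to watch for that propagation from the terminal (using the example record above):

```bash
$ dig +short still.visualmode.dev
23.12.234.82
```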
And soon enough we can visit our Rails app at the custom domain.


@@ -1,28 +0,0 @@
# Check Postgres Version Running In Docker Container
I have a docker container that I'm using to run a PostgreSQL development
database on my local machine. It was a while ago when I set it up, so I can't
remember specifically which major version of PostgreSQL I am using.
I use `docker ps` to list the names of each container.
```bash
$ docker ps --format "{{.Names}}"
still-postgres-1
better_reads-postgres-1
```
I grab the one I am interested in. In this case, that is `still-postgres-1`.
Then I can execute a `select version()` statement with `psql` against the
container with that name like so:
```bash
$ docker exec still-postgres-1 psql -U postgres -c "select version()";
version
---------------------------------------------------------------------------------------------------------------------
PostgreSQL 16.2 (Debian 16.2-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
(1 row)
```
And there I have it. I'm running Postgres v16 in this container.
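And if I want to poke around beyond a single statement, I can open an interactive session with the `-it` flags:

```bash
$ docker exec -it still-postgres-1 psql -U postgres
postgres=# show server_version;
```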


@@ -1,22 +0,0 @@
# List Running Docker Containers
The `docker` CLI has a `ps` command that will list all running containers by
default.
When I run it, I can see that I have a container running a Postgres database
and another running a MySQL database.
```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba792e185734 postgres:latest "docker-entrypoint.s…" 12 days ago Up 12 days 0.0.0.0:9876->5432/tcp better_reads-postgres-1
7ca7c1e882e0 mysql:8.0 "docker-entrypoint.s…" 19 months ago Up 8 seconds 33060/tcp, 0.0.0.0:3309->3306/tcp some-app-db-1
```
It lists several pieces of info about the containers: the container id, the
image it is based on, when it was created, the running status, the port
configuration, and the name of the container.
If I run `docker ps --help` I can see some additional options. One option is
the `--all` flag which will display all known docker containers instead of just
the running ones.


@@ -1,53 +0,0 @@
# Prevent Containers From Running On Startup
I have a bunch of docker containers managed by Docker Desktop. Some are related
to projects I'm actively working on, whereas many others are for inactive projects.
When I restart my machine, regardless of which containers I had running or
turned off, several of them are booted into a running state on startup. This is
because their restart policy is set to `always`. That's fine for the project I'm
actively working on, but the others I would like to be _off_ by default.
I need to update each of their restart policies from `always` to `no`.
First, I need to figure out their container IDs:
```bash
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eb7b40aeba2d postgres:latest "docker-entrypoint.s…" 3 months ago Up 11 minutes 0.0.0.0:9875->5432/tcp still-postgres-1
eb9ab2213f2b postgres:latest "docker-entrypoint.s…" 3 months ago Exited (0) 11 minutes ago next-drizzle-migration-repro-app-postgres-1
ba792e185734 postgres:latest "docker-entrypoint.s…" 4 months ago Up 11 minutes 0.0.0.0:9876->5432/tcp better_reads-postgres-1
3139f9beae76 postgres:latest "docker-entrypoint.s…" 9 months ago Exited (128) 7 months ago basic-next-prisma-postgres-1
```
Referencing the `CONTAINER ID` and `NAMES` columns, I'm able to then inspect
each container and see the current `RestartPolicy`:
```bash
$ docker inspect eb9ab2213f2b | grep -A3 RestartPolicy
"RestartPolicy": {
"Name": "always",
"MaximumRetryCount": 0
},
```
I can then update the `RestartPolicy` to be `no`:
```bash
$ docker update --restart no eb9ab2213f2b
```
Inspecting that container again, I can see the updated policy:
```bash
$ docker inspect eb9ab2213f2b | grep -A3 RestartPolicy
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
```
Rinse and repeat for each of the offending containers.
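If I were confident that _every_ container should stay off on startup, a little loop could handle them all at once (a sketch; note this hits every container on the machine, so use with care):

```bash
$ for id in $(docker ps --all --quiet); do docker update --restart no "$id"; done
```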
[source](https://stackoverflow.com/questions/45423334/stopping-docker-containers-from-being-there-on-startup)


@@ -1,42 +0,0 @@
# Run SQL Script Against Postgres Container
I've been using dockerized Postgres for local development with several projects
lately. This is typically with framework tooling (like Rails) where schema
migrations and query execution are handled by the tooling using the specified
connection parameters.
However, I was experimenting with and iterating on some Postgres functions
outside of any framework tooling. I needed a way to run the SQL script that
(re)creates the function via `psql` on the docker container.
With a local, non-containerized Postgres instance, I'd redirect the file to
`psql` like so:
```bash
$ psql -U postgres -d postgres < experimental-functions.sql
```
When I tried doing this with `docker exec` though, it was silently failing /
doing nothing. As far as I can tell, there was a mismatch with redirection
handling across the bounds of the container.
To get around this, I first copy the file into the `/tmp` directory on the
container:
```bash
$ docker cp experimental-functions.sql still-postgres-1:/tmp/experimental-functions.sql
```
Then the `psql` command that docker executes can be pointed directly at a
local-to-it SQL file.
```bash
$ docker exec still-postgres-1 psql \
-U postgres \
-d postgres \
-f /tmp/experimental-functions.sql
```
There are probably other ways to handle this, but I got into a nice rhythm with
this file full of `create or replace function ...` definitions where I could
modify, copy over, execute, run some SQL to verify, and repeat.
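One of those other ways, which I believe avoids the redirection mismatch entirely, is to keep stdin open with `-i` and pipe the file through it:

```bash
$ docker exec -i still-postgres-1 psql -U postgres -d postgres < experimental-functions.sql
```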


@@ -1,4 +0,0 @@
{
"excludes": ["README.md"],
"plugins": ["https://plugins.dprint.dev/markdown-0.16.0.wasm"]
}


@@ -1,48 +0,0 @@
# Create bigint Identity Column For Primary Key
Using the Drizzle ORM with Postgres, here is how we can create a table that
uses a [`bigint` data
type](https://orm.drizzle.team/docs/column-types/pg#bigint) as a primary key
[identity
column](https://www.postgresql.org/docs/current/ddl-identity-columns.html).
```typescript
import {
pgTable,
bigint,
text,
timestamp,
} from "drizzle-orm/pg-core";
// Users table
export const users = pgTable("users", {
id: bigint({ mode: 'bigint' }).primaryKey().generatedAlwaysAsIdentity(),
email: text("email").unique().notNull(),
name: text("name").notNull(),
createdAt: timestamp("created_at").defaultNow().notNull(),
});
```
There are a couple key pieces here:
1. We import `bigint` so that we can declare a column of that type.
2. We specify that it is a primary key with `.primaryKey()`.
3. We declare its default value as `generated always as identity` via
`.generatedAlwaysAsIdentity()`.
Note: you need to specify the `mode` for `bigint` or else you will see a
`TypeError: Cannot read properties of undefined (reading 'mode')` error.
If we run `npx drizzle-kit generate` the SQL migration file that gets
generated will contain something like this:
```sql
--> statement-breakpoint
CREATE TABLE IF NOT EXISTS "users" (
"id" bigint PRIMARY KEY GENERATED ALWAYS AS IDENTITY (sequence name "users_id_seq" INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1),
"email" text NOT NULL,
"name" text NOT NULL,
"created_at" timestamp DEFAULT now() NOT NULL,
CONSTRAINT "users_email_unique" UNIQUE("email")
);
```


@@ -1,39 +0,0 @@
# Drizzle Tracks Migrations In A Log Table
When I generate (`npx drizzle-kit generate`) and apply (`npx drizzle-kit
migrate`) schema migrations against my database with Drizzle, there are SQL
files that get created and run.
How does Drizzle know which SQL files have been run and which haven't?
Like many SQL schema migration tools, it uses a table in the database to record
this metadata. Drizzle defaults to calling this table `__drizzle_migrations`
and puts it in the `drizzle` schema (which is like a database namespace).
Let's take a look at this table for a project with two migrations:
```sql
postgres> \d drizzle.__drizzle_migrations
Table "drizzle.__drizzle_migrations"
Column | Type | Collation | Nullable | Default
------------+---------+-----------+----------+----------------------------------------------------------
id | integer | | not null | nextval('drizzle.__drizzle_migrations_id_seq'::regclass)
hash | text | | not null |
created_at | bigint | | |
Indexes:
"__drizzle_migrations_pkey" PRIMARY KEY, btree (id)
postgres> select * from drizzle.__drizzle_migrations;
id | hash | created_at
----+------------------------------------------------------------------+---------------
1 | 8961353bf66f9b3fe1a715f6ea9d9ef2bc65697bb8a5c2569df939a61e72a318 | 1730219291288
2 | b75e61451e2ce37d831608b1bc9231bf3af09e0ab54bf169be117de9d4ff6805 | 1730224013018
(2 rows)
```
Notice that Drizzle stores each migration record as [a SHA256 hash of the
migration
file](https://github.com/drizzle-team/drizzle-orm/blob/526996bd2ea20d5b1a0d65e743b47e23329d441c/drizzle-orm/src/migrator.ts#L52)
and a timestamp of when the migration was run.
[source](https://orm.drizzle.team/docs/drizzle-kit-migrate#applied-migrations-log-in-the-database)


@@ -1,56 +0,0 @@
# Get Fields For Inserted Row
With Drizzle, we can insert a row with a set of values like so:
```typescript
await db
.insert(todoItems)
.values({
title,
userId,
description,
})
```
The result of this is `QueryResult<never>`. In other words, nothing useful is
coming back to us from the database.
Sometimes an insert is treated as fire-and-forget (as long as it succeeds);
since we know what data we are inserting, we don't need the database to
respond. But what about values that are generated or computed by the database
-- such as an id from a sequence, timestamp columns that default to `now()`, or
generated columns?
To get all the fields of a freshly inserted row, we can tack on [the
`returning()` function](https://orm.drizzle.team/docs/insert#insert-returning),
which likely adds something like [`returning
*`](https://www.postgresql.org/docs/current/dml-returning.html) to the insert
query under the hood.
```typescript
await db
.insert(todoItems)
.values({
title,
userId,
description,
})
.returning()
```
This will have a return type of an array of `todoItems` rows, which means that
for each inserted row we'll have all the fields (columns) for that row.
Alternatively, if we just need the generated ID for the new row(s), we can use
a partial return like so:
```typescript
await db
.insert(todoItems)
.values({
title,
userId,
description,
})
.returning({ id: todoItems.id })
```


@@ -9,10 +9,10 @@ test runs. Most of these files are tracked (already checked in to the
repository). There are also many new files generated as part of the most recent
test run.
I want to stage the changes to files that are already tracked, but hold off on
doing anything with the new files.
I want to staging the changes to files that are already tracked, but hold off
on doing anything with the new files.
Running `git add spec/cassettes` won't do the trick because that will pull in
Running `git add spec/cassettes` won't do the track because that will pull in
everything. Running `git add --patch spec/cassettes` will take too long and be
tedious. Instead what I want is the `-u` flag. It's short for _update_ which
means it will only stage already tracked files.


@@ -1,43 +0,0 @@
# Better Diffs With Delta
A `git diff` from the command line is relatively bare bones. It shows you
removed lines and added lines that make up a changeset, with the former text in
red and the latter text in green. All other contextual text is in white. I've
found this to be good enough for most of the life of my git usage. I've been
missing out though.
By using [`delta`](https://github.com/dandavison/delta) as the pager and diff
filter for `git`, I get a bunch of nice visual improvements.
- Removals and additions get red and green shaded backgrounds
- Syntax highlighting for most languages
- Highlighting of the specific part of a line that changed
- Clearer visual spacing and layout
To get all of this, all I had to do was install `delta`:
```bash
$ brew install delta
```
And then add `delta` as both the _core_ pager and `diffFilter` in my global git
config file:
```
[core]
pager = delta
[interactive]
singleKey = true # unrelated, but nice to have
diffFilter = delta --color-only
```
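If you'd rather set those from the command line, the equivalent `git config` invocations (leaving out the unrelated `singleKey` setting) would be:

```bash
$ git config --global core.pager delta
$ git config --global interactive.diffFilter 'delta --color-only'
```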
It's also recommended that you use `zdiff3` for your merge conflict style,
which I already had:
```
[merge]
conflictstyle = zdiff3
```
Once you have this all configured, try a `git diff` or `git add --patch` and see
how much more visual info you get.


@@ -1,28 +0,0 @@
# Check How A File Is Being Ignored
There are a few places on your machine where you can specify the files that git
should ignore. The most common is a repository's `.gitignore` file. The other
places those excludes are specified can be more obscure. Fortunately, `git
check-ignore` is a command that can show you specifically where.
For instance, let's check why my `notes.md` file is being ignored.
```bash
$ git check-ignore -v notes.md
.git/info/exclude:7:notes.md notes.md
```
At some point I added it to my repo's `.git/info/exclude` file. The `-v` flag
(_verbose_) when included with `check-ignore` tells me the file location.
How about these pesky `.DS_Store` directories? How are those being ignored?
```bash
$ git check-ignore -v .DS_Store
/Users/jbranchaud/.gitignore:3:.DS_Store .DS_Store
```
Ah yes, I had added it to my _global exclude file_ which I've configured in
`~/.gitconfig` to be the `~/.gitignore` file.
See `man git-check-ignore` for more details.


@@ -1,38 +0,0 @@
# Check If A File Has Changed In A Script
If I'm at the command line and I want to check if a file has changed, I can run
`git diff` and see what has changed. If I want to be more specific, I can run
`git diff README.md` to see if there are changes to that specific file.
If I'm trying to do this check in a script though, I want the command to clearly
tell the script _Yes_ or _No_. Usually a script looks for an exit code to
determine what path to take. But as long as `git diff` runs successfully,
regardless of whether or not there are changes, it is going to have an
affirmative exit code of `0`.
This is why `git diff` offers the `--exit-code` flag.
> Make the program exit with codes similar to diff(1). That is, it exits with 1
> if there were differences and 0 means no differences.
With that in mind, we can wire up a script with `git diff` that takes different
paths depending on whether or not there are changes.
```bash
if ! git diff --exit-code README.md; then
echo "README.md has changes"
else
echo "README.md is clean"
fi
```
We can take this a step further and instead use the `--quiet` flag.
> Disable all output of the program. Implies --exit-code. Disables execution of
> external diff helpers whose exit code is not trusted
This exhibits the same behavior as `--exit-code` and goes the additional step of
silencing diff output and disabling execution of external diff helpers like
`delta`.
See `man git-diff` for more details.


@@ -1,36 +0,0 @@
# Check If A File Is Under Version Control
The `git ls-files` command can be used with the `--error-unmatch` flag to check
if a file is under version control. It does this by checking if any of the
listed files appears on the _index_. If any does not, it is treated as an error.
In a project, I have a `README.md` that is under version control. And I have
`node_modules` that shouldn't be under version control (which is why they are
listed in my `.gitignore` file). I can check the README and a file somewhere in
`node_modules`.
```bash
git ls-files --error-unmatch README.md
README.md
git ls-files --error-unmatch node_modules/@ai-sdk/anthropic/CHANGELOG.md
error: pathspec 'node_modules/@ai-sdk/anthropic/CHANGELOG.md' did not match any file(s) known to git
Did you forget to 'git add'?
```
Notice the second command results in an error because of the untracked
`CHANGELOG.md` file in `node_modules`.
Here is another example of this at work while specifying multiple files:
```bash
git ls-files --error-unmatch README.md node_modules/@ai-sdk/anthropic/CHANGELOG.md package.json
README.md
package.json
error: pathspec 'node_modules/@ai-sdk/anthropic/CHANGELOG.md' did not match any file(s) known to git
Did you forget to 'git add'?
```
Each tracked file gets listed and then the untracked file results in an error.
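In a script, it is the exit code that matters, so a tracked-file check might look like this (silencing the normal output):

```bash
if git ls-files --error-unmatch README.md > /dev/null 2>&1; then
  echo "tracked"
else
  echo "not tracked"
fi
```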
See `man git-ls-files` for more details.


@@ -1,35 +0,0 @@
# Cherry Pick Multiple Commits At Once
I've always thought of `git cherry-pick` as being a command that you can run
against a single commit by specifying the SHA of that commit. That's how I've
always used it.
The man page for `git-cherry-pick` plainly states:
> Given one or more existing commits, apply the change each one introduces,
> recording a new commit for each.
We can cherry pick multiple commits at once in a single command. They will be
applied one at a time in the order listed.
Here we can see an example of applying two commits to the current branch and
the accompanying output as they are auto-merged.
```bash
$ git cherry-pick 5206af5 6362f41
Auto-merging test/services/event_test.rb
[jb/my-feature-branch 961f3deb] Use the other testing syntax
Date: Fri May 2 10:50:14 2025 -0500
1 file changed, 7 insertions(+), 7 deletions(-)
Auto-merging test/services/event_test.rb
[jb/my-feature-branch b15835d0] Make other changes to the test
Date: Fri May 2 10:54:48 2025 -0500
1 file changed, 7 insertions(+), 7 deletions(-)
```
If the commits cannot be cleanly merged, then you may need to do some manual
resolution as they are applied. Or maybe you want to try including the
`-Xpatience` merge strategy option.
See `man git-cherry-pick` for more details. Make sure to look at the _Examples_
section which contains much more advanced examples beyond what is shown above.


@@ -1,26 +0,0 @@
# Clear Entries From Git Stash
I often stash changes as I'm moving between branches, rebasing, or pulling in
changes from the remote. Usually these are changes that I will want to restore
with a `git stash pop` in a few moments.
However, sometimes these stashed changes are abandoned to time.
When I run `git stash list` on an active project, I see that there are nine
entries in the list. When I do `git show stash@{0}` and `git show stash@{1}` to
see the changes that comprise the latest two entries, I don't see anything I
care about.
I can get rid of those individual entries with, say, `git stash drop
stash@{0}`.
But I'm pretty confident that I don't care about any of the nine entries in my
stash list, so I want to _clear_ out all of them. I can do that with:
```bash
$ git stash clear
```
Now when I run `git stash list`, I see nothing.
See `man git-stash` for more details.


@@ -1,27 +0,0 @@
# Count All Files Of Specific Type Tracked By Git
I want to get a count of all the markdown files in my [TIL
repo](https://github.com/jbranchaud/til). Since all the files I care about are
tracked by `git`, I can use `git ls-files` to get a listing of all files. That
command on its own lists all files tracked by your git repository. Though there
are many other flags we can apply, that will do for my purposes.
By giving `git ls-files` a pattern to match against, I can turn up just, for
instance, markdown files (`*.md`). I can pipe that to `wc -l` to get a count
rather than exploding my terminal with a list of file names.
```bash
git ls-files '*.md' | wc -l
1503
```
That command includes `README.md` and `CONTRIBUTING.md`, but really I only want
to count the markdown files that constitute a TIL. Those all happen to be
nested under a single directory. So I can tweak the glob pattern like so:
```bash
git ls-files '*/*.md' | wc -l
1501
```
See `man git-ls-files` for more details.


@@ -1,48 +0,0 @@
# Count Number Of Commits On A Branch
The `git rev-list` command will show all commits that fit the given revision
criteria. By adding in the `--count` flag, we get a count of the number of
commits that would have been displayed. Knowing this, we can get the count of
commits for the current branch like so:
```bash
$ git rev-list --count HEAD
4
```
This finds and counts commits from `HEAD` (usually the tip of the current
branch) all the way back in reverse chronological order to the beginning of the
branch (typically the beginning of the repository). This works exactly as
expected for the `main` branch.
What about when we are on a feature branch though?
Let's say we've branched off `main` and made a few commits. And now we want the
count.
```bash
$ git rev-list --count HEAD
7
```
Unfortunately, that is counting up the commits on the feature branch but it
keeps counting all the way back to the beginning of the repo.
If we want a count of just the commits on the current branch, then we can
specify a range: from whatever `main` was when we branched to the `HEAD` of
this branch.
```bash
$ git rev-list --count main..HEAD
3
```
This is the same as saying, I want all commits on `HEAD`, but exclude (`^`) the
commits on `main`:
```bash
git rev-list --count HEAD ^main
3
```
See `man git-rev-list` for more details.


@@ -1,32 +0,0 @@
# Exclude A Directory During A Command
Many of the git commands we use, such as `git add`, `git restore`, etc., target
files and paths relative to the current directory. This is typically exactly
what we want: to stage, unstage, and otherwise operate on the files and
directories in front of us.
I recently ran into a situation where I needed to restore a small subset of
changes. At the same time, I had a massive number of auto-generated files
recording HTTP interactions (hundreds of files, modified on the working tree).
I wanted to run a `git restore`, but wading through all those HTTP recording
files was not feasible.
I needed to exclude those files. They all belonged to a `spec/cassettes`
directory. I could exclude them with a _pathspec_ magic signature pattern which
is used to alter and limit the paths in a git command.
A _pathspec_ magic signature is a special pattern made up of a `:` followed by
some signature declaring what the pattern means.
The `(exclude)`, `!`, and `^` magic signatures all mean the same thing —
exclude. So, we can exclude a directory from a `git restore` command like so:
```bash
$ git restore --patch -- . ':!spec/cassettes'
```
We've employed two pathspec patterns here. The first, `.`, scopes everything to
the current directory. The second, `':!spec/cassettes'` excludes everything in
the `spec/cassettes` directory.
See `man gitglossary` for more on _pathspecs_.


@@ -1,26 +0,0 @@
# Files With Local Changes Cannot Be Removed
This is a nice quality-of-life feature in `git` that should help you avoid
accidentally discarding changes that won't be retrievable.
```bash
git rm .tool-versions
error: the following file has local modifications:
.tool-versions
(use --cached to keep the file, or -f to force removal)
```
My `.tool-versions` file has some local changes. I don't realize that and I go
to issue a `git rm` command on that file. Instead of quietly wiping out my
changes, `git` lets me know I'm doing something destructive (these local
changes won't be in the diff or the reflog).
I can force the removal if I know what I'm doing with the `-f` flag. Or I can
take the two step approach of calling `git restore` on that file and then `git
rm`.
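That two-step approach looks like this:

```bash
$ git restore .tool-versions
$ git rm .tool-versions
```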
The `--cached` flag is also interesting because it doesn't actually delete the
file from my file system, but it does stage the file deletion with `git`. That
means the file now shows up as one of my untracked files.
See `man git-rm` for more details.


@@ -1,39 +0,0 @@
# Fix Whitespace Errors Throughout Branch Commits
Let's say we've been working on some changes to our repository on a branch.
We've made several commits. We are close to putting up a PR, but we want to
make sure everything is tidied up.
We run a check and see that there are some whitespace errors that should be
fixed.
```bash
$ git diff main --check
README.md:1: trailing whitespace.
+# git-playground
script.sh:9: trailing whitespace.
+
```
This post isn't able to show the highlighted whitespace errors, but we can see
the warnings above.
Rather than cluttering things with an additional commit that fixes these errors
or manually cleaning up each commit, we can ask `git` to fix it for us.
```bash
$ git rebase --whitespace=fix main
```
That will rebase each commit, fixing the whitespace errors along the way.
We can run the error check again and see no output, which means we are good to
go.
```bash
$ git diff main --check
```
See the section on `--whitespace` in `man git-apply` for more details.
[source](https://git-scm.com/book/en/v2/Customizing-Git-Git-Configuration)


@@ -1,25 +0,0 @@
# Get Latest Commit Timestamp For A File
The `git log` command can tell you all the commits that touched a file. That
can be narrowed down to the latest commit for that file with the `-1` flag. The
commit that it reports can then be further formatted with the `--format`
flag.
The `%ai` format pattern gives the date the commit was authored in an ISO
8601-like format. The `%aI` (capital `I`) gives the date the commit was
authored strictly in the ISO 8601 format.
Here are examples of both side by side:
```bash
git log -1 --format=%ai -- README.md
2024-10-15 13:59:09 -0500
git log -1 --format=%aI -- README.md
2024-10-15T13:59:09-05:00
```
I made use of this in a script where I needed to get an idea of when various
files were most recently modified.
See `man git-log` and the `PRETTY FORMATS` section for more details.


@@ -1,30 +0,0 @@
# Highlight Extra Whitespace In Diff Output
When running a `git diff` (or `git add --patch`) I'll sometimes come across
lines that don't have any visible changes. This is usually because some
whitespace characters were either added (by accident) or removed (often by an
autoformatter).
Depending on the `core.whitespace` config, you'll probably see at least some of
the whitespace errors that git provides. By default, git only highlights
whitespace errors on added (`new`) lines. However if some extra whitespace was
originally committed and is now being removed, it won't be highlighted on the
`old` line in the diff.
We can have git always highlight whitespace errors by setting
`wsErrorHighlight` to `all` in the global git config.
```bash
$ git config --global diff.wsErrorHighlight all
```
Which updates the global gitconfig file with the following line:
```
[diff]
wsErrorHighlight = all
```
The `all` option is a shorthand for `old,new,context`.
See `man git-diff` for more details.

@@ -1,44 +0,0 @@
# Highlight Small Change On Single Line
Sometimes a change gets made on a single, long line of text in a Git tracked
file. If it is a small, subtle change, then it can be hard to pick out when
looking at the diff. A standard diff will show a green line of text stacked on
a red line of text with no more granular information.
There are two ways we can improve the diff output in these situations.
The first is built into git. It is the `--word-diff` flag, which will visually
isolate the portions of the line that have changed.
```bash
git diff --word-diff README.md
```
Which will produce something like this:
```diff
A collection of concise write-ups on small things I learn [-day to day-]{+day-to-day+} across a
```
The outgoing part is wrapped in `[-...-]` and the incoming part is wrapped in
`{+...+}`.
The second (and my preference) is to use
[`delta`](https://github.com/dandavison/delta) as an external differ and pager
for git.
```bash
git -c core.pager=delta diff README.md
```
I cannot visually demonstrate the difference in a standard code block. So I'll
describe it. We see a red and green line stacked on each other, but with muted
background colors. Then the specific characters that are different stand out
because they are highlighted with brighter red and green. I [posted a visual
here](https://bsky.app/profile/jbranchaud.bsky.social/post/3ln245orlxs2j).
`delta` can also be added as a standard part of your config like I demonstrate
in [Better Diffs With Delta](git/better-diffs-with-delta.md).
h/t to [Dillon Hafer's post on
`--word-diff`](https://til.hashrocket.com/posts/t994rwt3fg-finds-diffs-in-long-line)

@@ -1,49 +0,0 @@
# List All Files Added During Span Of Time
I wanted to get an idea of all the TIL posts I wrote during 2024. Every TIL I
write is under version control in a [git repo on
github](https://github.com/jbranchaud/til). That means git has all the info I
need to figure that out.
The `git diff` command is a good fit for this problem. With the
`--diff-filter=A` flag I can restrict the results to just files that were
_Added_. And with `--name-only` I can cut all the other diff details out and
get just filenames.
But filenames added to which commits? We need to specify a ref range. There is
a ton of flexibility in how you define a ref, including [a date specification
suffix](https://git-scm.com/docs/gitrevisions#Documentation/gitrevisions.txt-emltrefnamegtltdategtemegemmasteryesterdayememHEAD5minutesagoem)
that points to the value of the ref at an earlier point in time.
So, how about from the beginning of 2024 to the beginning of 2025:
```
HEAD@{2024-01-01}..HEAD@{2025-01-01}
```
Putting that all together, we get this command and potentially a big list of files.
```bash
$ git diff --diff-filter=A --name-only HEAD@{2024-01-01}..HEAD@{2025-01-01}
```
I wanted to restrict the results to just markdown files, so I added a filename
pattern.
```bash
$ git diff --diff-filter=A --name-only HEAD@{2024-01-01}..HEAD@{2025-01-01} -- "*.md"
```
I could even go a step further to see only the files added to a specific
directory.
```bash
$ git diff --diff-filter=A --name-only HEAD@{2024-01-01}..HEAD@{2025-01-01} -- "postgres/*.md"
```
As a final bonus, I can spit out the github URLs for all those files with a bit of `awk`.
```bash
$ git diff --diff-filter=A --name-only HEAD@{2024-01-01}..HEAD@{2025-01-01} -- "postgres/*.md" |
awk '{print "https://github.com/jbranchaud/til/blob/master/" $0}'
```

@@ -1,28 +0,0 @@
# List All Git Aliases From gitconfig
Running the `git config --list` command will show all of the configuration
settings you have for `git` relative to your current location. Though most of
these settings probably live in `~/.gitconfig`, you may also have some locally
specified ones in `.git/config`. This will grab them all including any `alias`
entries.
We can narrow things down to just `alias` entries using the `--get-regexp` flag.
```bash
$ git config --get-regexp '^alias\.'
alias.ap add --patch
alias.authors shortlog -s -n -e
alias.co checkout
alias.st status
alias.put push origin HEAD
alias.fixup commit --fixup
alias.squash commit --squash
alias.doff reset HEAD^
alias.add-untracked !git status --porcelain | awk '/\?\?/{ print $2 }' | xargs git add
alias.reset-authors commit --amend --reset-author -CHEAD
```
I use `git doff` all the time on feature branches to "pop" the latest commit
onto the working copy. I was trying to remember exactly what the `git doff`
command is and this was an easy way to check.
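Relatedly, if you know the alias name and just want its definition, `--get`
does the trick:

```bash
$ git config --get alias.doff
reset HEAD^
```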

@@ -1,33 +0,0 @@
# Override The Global Git Ignore File
One of the places that `git` looks when deciding whether to pay attention to or
ignore a file is in your global _ignore_ file. By default, `git` will look for
this file at `$XDG_CONFIG_HOME/git/ignore` or `$HOME/.config/git/ignore`.
I don't have `$XDG_CONFIG_HOME` set on my machine, so it will fall back to the
config directory under `$HOME`.
I may have to create the `git` directory and `ignore` file.
```bash
$ mkdir $HOME/.config/git
$ touch $HOME/.config/git/ignore
```
Then I can add files and directories to exclude to that `ignore` file just like
I would any other `.gitignore` file.
If I'd prefer for the global _ignore_ file to live somewhere else, I can
specify that location and filename in my `$HOME/.gitconfig` file.
```
[core]
excludesFile = ~/.gitignore
```
Setting this will override the default, meaning the default file mentioned
above will be ignored ("now you know how it feels, ignore file!"). In this
case, I'll need to create the `.gitignore` file in my home directory and add
any of my ignore rules.
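To confirm which ignore file (and which rule in it) is matching a given path,
`git check-ignore` is handy. A sketch with a hypothetical `.DS_Store` and path:

```bash
# -v shows the source file, line number, and pattern that matched
$ git check-ignore -v .DS_Store
/Users/me/.gitignore:1:.DS_Store	.DS_Store
```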
[source](https://git-scm.com/docs/gitignore)

@@ -1,34 +0,0 @@
# Reference Commits Earlier Than Reflog Remembers
While preparing some stats for a recent blog post on [A Decade of
TILs](https://www.visualmode.dev/a-decade-of-tils), I ran into an issue
referencing chunks of time further back than 2020.
```bash
git diff --diff-filter=A --name-only HEAD@{2016-02-06}..HEAD@{2017-02-06} -- "*.md"
warning: log for 'HEAD' only goes back to Sun, 20 Dec 2020 00:26:27 -0600
warning: log for 'HEAD' only goes back to Sun, 20 Dec 2020 00:26:27 -0600
```
This is because `HEAD@...` is a reference to the `reflog`. The `reflog` is a
local-only log of objects and activity in the repository. That date looks
suspiciously like the time that I got this specific machine and cloned the
repo.
In order to access this information, I need a different approach of finding
references that bound these points in time.
How about asking `rev-list` for the first commit it can find before the given
dates in 2017 and 2016 and then using those.
```bash
git rev-list -1 --before="2017-02-07 00:00" HEAD
17db6bc4468616786a8f597a10d252c24183d82e
git rev-list -1 --before="2016-02-07 00:00" HEAD
f1d3d1f796007662ff448d6ba0e3bbf38a2b858d
git diff --diff-filter=A --name-only f1d3d1f796007662ff448d6ba0e3bbf38a2b858d..17db6bc4468616786a8f597a10d252c24183d82e -- "*.md"
# git outputs a bunch of files ...
```

@@ -1,20 +0,0 @@
# Restore File From One Branch To The Current
On one feature branch I have created some files and made changes to some
existing files as part of spiking a feature. Now I'm on a different branch
taking another shot at it. I want changes from one or two of the files. In the
past I've used `git-checkout` for this task. However, I believe this is one of
the use cases they had in mind when they added `git-restore`.
What I want to do is _restore_ the state of a file as it appears on some source
branch to my current branch. Here is what that looks like:
```bash
$ git restore --source=some-feature-branch app/models/contact.rb
```
Now when I check `git status` I'll see the state of that file on the
`some-feature-branch` branch overlaid on my current working copy. If the file
doesn't exist, it will be created.
See `man git-restore` for more details.

@@ -1,56 +0,0 @@
# Set Up GPG Signing Key
I wanted to have that "Verified" icon start showing up next to my commits in
GitHub. To do that, I need to generate a GPG key, add the public key to my
GitHub account, and then configure the signing key in my git config.
```bash
# generate a gpg key
$ gpg --full-generate-key
# Pick the following options when prompted
# - Choose "RSA and RSA" (Option 1)
# - Max out key size at 4096
# - Choose expiration date (e.g. 0 for no expiration)
# - Enter "Real name" and "Email"
#   (I matched those to what is in my global git config)
# - Set passphrase (I had 1password generate a 4-word passphrase)
```
It may take a few seconds to create.
I can see it was created by listing my GPG keys.
```bash
$ gpg --list-secret-keys --keyid-format=long
[keyboxd]
---------
sec rsa4096/1A8656918A8D016B 2025-10-19 [SC]
...
```
I'll need the `1A8656918A8D016B` portion of that response for the next command
and it is what I set as my signing key in my git config.
First, though, I add the full public key block to my GitHub profile, which I can copy
like so:
```bash
$ gpg --armor --export 1A8656918A8D016B | pbcopy
```
And then I paste that as a new GPG Key on GitHub under _Settings_ -> _SSH and
GPG Keys_.
Last, I update my global git config with the signing key and the preference to
sign commits:
```bash
git config --global user.signingkey 1A8656918A8D016B
git config --global commit.gpgsign true
```
Without `commit.gpgsign`, I would have to specify the `-S` flag every time I
want to create a signed commit.
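Once the next commit is made, its signature can be spot-checked with:

```bash
$ git log --show-signature -1
```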
[source](https://git-scm.com/book/ms/v2/Git-Tools-Signing-Your-Work)

@@ -1,26 +0,0 @@
# Show Summary Stats For Current Branch
When I push a branch up to GitHub as a PR, there is a part of the UI that shows
you how many lines you've added and removed for this branch. It bases that off
the target branch which is typically your `main` branch.
The `git diff` command can provide those same stats right in the terminal. The
key is to specify the `--shortstat` flag which tells `git` to exclude other diff
output and only show:
- Number of files changed
- Number of insertions
- Number of deletions
Here are the summary stats for a branch I'm working on:
```bash
git diff --shortstat main
8 files changed, 773 insertions(+), 25 deletions(-)
```
We have to be on our feature branch and then we point to the branch (or whatever
ref) we want to diff against. Since I want to know how my feature branch
compares to `main`, I specify that.
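Relatedly, swapping `--shortstat` for `--stat` keeps the summary line and adds
a per-file breakdown:

```bash
$ git diff --stat main
```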
See `man git-diff` for more details.

@@ -1,23 +0,0 @@
# Use External Diff Tool Like Difftastic
Assuming we already have a tool like `difft`
([difftastic](https://difftastic.wilfred.me.uk/introduction.html)) available on
our machine, we can use it as a diff viewer for the various `git` commands that
display a diff.
This requires a manual override which involves two pieces — an inline
configuration of `diff.external` specifying the binary of the external differ
and the `--ext-diff` flag which tells these commands to use the external diff
binary.
Here is what `git show` looks like with `difft`:
```bash
$ git -c diff.external=difft show --ext-diff
```
Without the `--ext-diff` flag, it will fall back to the default differ despite
`diff.external` being set.
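An equivalent one-off uses the `GIT_EXTERNAL_DIFF` environment variable, which
git consults for the same setting:

```bash
$ GIT_EXTERNAL_DIFF=difft git show --ext-diff
```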
See `man git-diff` and friends for the `--ext-diff` flag. See `man git-config`
for `diff.external`.

@@ -1,41 +0,0 @@
# Use Labels To Block PR Merge
Let's say our GitHub project has custom labels for both `no merge` and `wip`
(_work in progress_). Whenever either of those labels has been applied to a PR,
we want there to be a failed check so as to block the merge. This is useful to
ensure automated tools (as well as someone not looking closely enough) don't
merge a PR that isn't _ready to go_.
This can be achieved with a basic GitHub Actions workflow that requires no
3rd-party actions. We can add the following as
`.github/workflows/block-labeled-prs.yml` in our project.
```yaml
name: Block Labeled PR Merges

on:
  pull_request:
    types: [labeled, unlabeled, opened, edited, synchronize]

jobs:
  prevent-merge:
    if: ${{ contains(github.event.*.labels.*.name, 'no merge') || contains(github.event.*.labels.*.name, 'wip') }}
    name: Prevent Merging
    runs-on: ubuntu-latest
    steps:
      - name: Check for label
        run: |
          echo "Pull request label prevents merging."
          echo "Labels: ${{ join(github.event.*.labels.*.name, ', ') }}"
          echo "Remove the blocking label(s) to skip this check."
          exit 1
```
This workflow runs when a pull request is opened, when it is edited or
synchronized, and when a label change is made. The job `prevent-merge` checks if
any of the label names match `no merge` or `wip`. If so, we echo out some
details in the ubuntu container and then `exit 1` to fail the check. Note that
GitHub only hard-blocks the merge if this check is marked as required in the
branch protection settings; otherwise the red X serves as a strong deterrent.
Shoutout to [Jesse Squires'
implementation](https://www.jessesquires.com/blog/2021/08/24/useful-label-based-github-actions-workflows/#updated-21-march-2022)
which I've heavily borrowed from here.

@@ -1,25 +0,0 @@
# Access Your GitHub Profile Photo
Let's say I have my [GitHub profile](https://github.com/jbranchaud) pulled up in
the browser.
```
https://github.com/jbranchaud
```
If I then add `.png` to the end of that in the URL bar:
```
https://github.com/jbranchaud.png
```
I'll be redirected to the URL where the full image file lives. In my case:
```
https://avatars.githubusercontent.com/u/694063?v=4
```
You can pull up `https://github.com/<username>.png` to access your own profile
image.
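That also means the image is easy to grab from the command line, e.g.:

```bash
$ curl -L https://github.com/jbranchaud.png -o avatar.png
```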
[source](https://dev.to/10xlearner/how-to-get-the-profile-picture-of-a-github-account-1d82)

@@ -1,19 +0,0 @@
# Open A PR To An Unforked Repo
Sometimes I will clone a repo to explore the source code or to look into a
potential bug. If my curiosity takes me far enough to make some changes, then I
jump through the hoops of creating a fork, reconfiguring branches, pushing to my
fork, and then opening the branch as a PR against the original repo.
The `gh` CLI allows me to avoid all that hoop-jumping. Directly from the cloned
repo I can use `gh` to create a new PR. It will prompt me to create a fork. If I
accept, it will seamlessly create it and then open a PR from my fork to the
original.
```bash
$ gh pr create
```
This allows me to create the PR with a few prompts from the CLI. If you prefer,
you can include the `--web` flag to open the PR creation screen directly in the
browser.

@@ -1,20 +0,0 @@
# Target Another Repo When Creating A PR
I have a [`dotfiles` repo](https://github.com/jbranchaud/dotfiles) that I forked
from [`dkarter/dotfiles`](https://github.com/dkarter/dotfiles). I'm adding a
bunch of my own customizations on a `main` branch while continually pulling in
and merging upstream changes.
The primary remote according to `gh` is `jbranchaud/dotfiles`. 98% of the time
that is what I want. However, I occasionally want to share some changes upstream
via a PR. Running `gh pr create` as is will create a PR against my fork. To
override this on a one-off basis, I can use the `--repo` flag.
```bash
$ gh pr create --repo dkarter/dotfiles
```
This will create a PR against `dkarter:master` from my branch (e.g.
[`jbranchaud:jb/fix-hardcoded-paths`](https://github.com/dkarter/dotfiles/pull/373)).
See `man gh-pr-create` for more details.

@@ -1,38 +0,0 @@
# Tell gh What The Default Repo Is
I recently forked [dkarter/dotfiles](https://github.com/dkarter/dotfiles) as a
way of bootstrapping a robust dotfile config for a new machine that I could
start making customizations to. I'm maintaining a `my-dotfiles` branch and keep
things in sync with the original upstream repo.
When trying to go to *my* fork of the repo
([jbranchaud/dotfiles](https://github.com/jbranchaud/dotfiles)) on the web with
the `gh` CLI tool, I ran into a weird issue. It was instead opening up to
`dkarter/dotfiles`.
`gh` was under the wrong impression about which repo should be considered the default.
To clarify things for `gh`, there is a command to set the default repo.
```bash
$ gh repo set-default jbranchaud/dotfiles
✓ Set jbranchaud/dotfiles as the default repository for the current directory
```
Now when I run `gh repo view --web`, it opens the browser to my fork of the
dotfiles.
But where does this setting live?
Opening this repo's `.git/config` file I can see a section for the `origin`
remote that includes a new line for `gh-resolved`. This being set to `base`
tells `gh` that this remote is the one to treat as the default repo.
```
[remote "origin"]
url = git@github.com:jbranchaud/dotfiles.git
fetch = +refs/heads/*:refs/remotes/origin/*
gh-resolved = base
```
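To check the current default without opening the config file, there's also a
`--view` flag (in recent versions of `gh`):

```bash
$ gh repo set-default --view
jbranchaud/dotfiles
```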
See `gh repo set-default --help` for more details.

@@ -15,10 +15,4 @@ $ godoc -http=:6060
and then visit `localhost:6060`.
Note: if you do not already have `godoc` installed, you can install it with:
```bash
$ go install golang.org/x/tools/cmd/godoc@latest
```
[source](http://www.andybritcliffe.com/post/44610795381/offline-go-lang-documentation)

@@ -1,70 +0,0 @@
# Add A Method To A Struct
Given a `struct` in Go, we can attach a method to that struct. Put another way,
we can define a method whose receiver is that struct. Then with an instance of
that struct, we can call the method.
Let's say we are modeling a turtle that can move around a 2D grid. A turtle has
a heading (the direction it is headed) and a location (its current X,Y
coordinate).
```go
type Heading string
const (
UP Heading = "UP"
RIGHT Heading = "RIGHT"
DOWN Heading = "DOWN"
LEFT Heading = "LEFT"
)
type Turtle struct {
Direction Heading
X int
Y int
}
```
We can then add a method like so by specifying the receiver as the first part
of the declaration:
```go
func (turtle *Turtle) TurnRight() {
switch turtle.Direction {
case UP:
turtle.Direction = RIGHT
case RIGHT:
turtle.Direction = DOWN
case DOWN:
turtle.Direction = LEFT
case LEFT:
turtle.Direction = UP
}
}
```
The receiver is a pointer to a `Turtle`. The method is called `TurnRight`.
There are no parameters or return values.
Here is a sequence of calls to demonstrate how it works:
```go
func main() {
turtle := Turtle{UP, 5, 5}
fmt.Println("Turtle Direction:", turtle.Direction)
//=> Turtle Direction: UP
turtle.TurnRight()
fmt.Println("Turtle Direction:", turtle.Direction)
//=> Turtle Direction: RIGHT
turtle.TurnRight()
fmt.Println("Turtle Direction:", turtle.Direction)
//=> Turtle Direction: DOWN
}
```
[source](https://go.dev/tour/methods/1)

@@ -1,63 +0,0 @@
# Basic Delve Debugging Session
When using [delve](https://github.com/go-delve/delve) to debug a Go program,
these are the series of things I usually find myself doing.
First, I start running the program with `dlv` including any arguments after a `--` (in my case, the `solve` subcommand and a filename).
```bash
$ dlv debug . -- solve samples/001.txt
```
`dlv` starts up and is ready to run my program from the beginning. I'll need to
set a couple breakpoints before continuing. I do this with the `break` command,
specifying the filename and line number.
```
(dlv) break main.go:528
Breakpoint 1 set at 0x10c1a5bea for main.traversePuzzleIterative() ./main.go:528
(dlv) break main.go:599
Breakpoint 2 set at 0x10c1a6dcc for main.traversePuzzleIterative() ./main.go:599
```
Now I can continue which will run the program until hitting a breakpoint.
```
(dlv) continue
> [Breakpoint 2] main.traversePuzzleIterative() ./main.go:599 (hits goroutine(1):1 total:1) (PC: 0x10c1a6dcc)
594: }
595: }
596:
597: topStackFrame := stack[len(stack)-1]
598: // if the current stack frame has more values, try the next
=> 599: if len(topStackFrame.PossibleValues) > 0 {
600: nextValue := topStackFrame.PossibleValues[0]
601: topStackFrame.PossibleValues = topStackFrame.PossibleValues[1:]
602: topStackFrame.CurrValue = nextValue
603:
604: // Undo the last placement and make a new one
```
I can see the context around the line we've stopped on. From here I can dig
into the current state of the program by looking at local variables (`locals`)
or printing out a specific value (`print someVar`). I can continue to step
through the program line by line with `next` or eventually run `continue` to
proceed to the next breakpoint.
```
(dlv) locals
diagnostics = main.Diagnostics {BacktrackCount: 0, NodeVisitCount: 1, ValidityCheckCount: 2,...+2 more}
stack = []main.StackData len: 1, cap: 1, [...]
emptyCellPositions = [][]int len: 3, cap: 4, [...]
emptyCellIndex = 1
status = "Invalid"
topStackFrame = main.StackData {RowIndex: 1, ColumnIndex: 7, PossibleValues: []int len: 8, cap: 8, [...],...+1 more}
(dlv) print topStackFrame
main.StackData {
RowIndex: 1,
ColumnIndex: 7,
PossibleValues: []int len: 8, cap: 8, [2,3,4,5,6,7,8,9],
CurrValue: 1,}
(dlv) next
> main.traversePuzzleIterative() ./main.go:600 (PC: 0x10c1a6dea)
```

@@ -1,41 +0,0 @@
# Check If Cobra Flag Was Set
When using [Cobra](https://github.com/spf13/cobra) to define a CLI, we can
specify a flag for a command like so:
```go
var Seed int64
myCmd.PersistentFlags().Int64VarP(&Seed, "seed", "", -1, "set a seed")
```
This `--seed` flag has a _default_ of `-1`. If the flag isn't specified, then
when we access that flag's value, we'll get `-1`.
But how do we differentiate between the _default_ `-1` and someone passing `-1`
to the `--seed` flag when running the program?
In the command definition, we can look at the flags and see, by name, if
specific ones were changed by user input rather than being the defaults.
```go
myCommand := &cobra.Command{
	// command setup ...
	Run: func(cmd *cobra.Command, args []string) {
		if cmd.Flags().Changed("seed") {
			seed, err := cmd.Flags().GetInt64("seed")
			if err != nil {
				fmt.Println("Seed flag is missing from `cmd.Flags()`")
				os.Exit(1)
			}
			fmt.Printf("Seed was set to %d\n", seed)
		} else {
			fmt.Println("Seed was not set")
		}
	},
}
```
If we don't want to rely on the default and instead want to specify some other
behavior when the flag is not manually set by the user, we can detect that
scenario like this.

@@ -1,51 +0,0 @@
# Combine Two Slices
The `append` function can be used to create a new slice with the contents of
the given slice and one or more items added to the end.
We can add one or more items like so:
```go
s1 := []int{1, 2, 3, 4}
s2 := append(s1, 5)
s3 := append(s2, 6, 7, 8)
fmt.Println(s1) //=> [1 2 3 4]
fmt.Println(s2) //=> [1 2 3 4 5]
fmt.Println(s3) //=> [1 2 3 4 5 6 7 8]
```
But what if we have a second slice instead of individual items? We could import
`slices` and use its `Concat` function. Or we can stick with `append` and
unpack that slice as a series of arguments into the second part of `append`
using `slice...`.
```go
s4 := append(s2, s1...)
fmt.Println(s4) //=> [1 2 3 4 5 1 2 3 4]
```
Here is the full example:
```go
package main
import (
"fmt"
)
func main() {
s1 := []int{1, 2, 3, 4}
s2 := append(s1, 5)
s3 := append(s2, 6, 7, 8)
fmt.Println(s1)
fmt.Println(s2)
fmt.Println(s3)
s4 := append(s2, s1...)
fmt.Println(s4)
}
```
[source](https://pkg.go.dev/builtin#append)

@@ -1,29 +0,0 @@
# Configure Max String Print Length For Delve
During a [Delve](https://github.com/go-delve/delve) debugging session, we can
print out the value of a given variable with the `print` command. Similarly, we
can see the values of all local variables with the `locals` command.
Whenever Delve is printing out strings and slices, it will truncate what it
displays to 64 characters (or items) by default.
```go
(dlv) print diagnostics.Solutions[0]
"295743861\n431865972\n876192543\n387459216\n612387495\n549216738\n7635...+25 more"
```
This can be overridden by [changing the `config` of
`max-string-len`](https://github.com/derekparker/delve/blob/237c5026f40e38d2dd6f62a7362de7b25b00c1c7/Documentation/cli/expr.md?plain=1#L59)
to something longer. In my case here, all I need are about 90 characters to
display my full string, so I run `config max-string-len 90` from the `dlv`
session.
```go
(dlv) config max-string-len 90
(dlv) print diagnostics.Solutions[0]
"295743861\n431865972\n876192543\n387459216\n612387495\n549216738\n763524189\n928671354\n154938627"
```
Now I can see the entire string instead of the truncated version.
[source](https://stackoverflow.com/a/52416264/535590)

@@ -1,50 +0,0 @@
# Connect To A SQLite Database
Using the `database/sql` package and the `github.com/mattn/go-sqlite3` driver,
we can connect to a SQLite database and run some queries. In my case, I have a
SQLite connection string exported to my environment, so I can access that with
`os.Getenv`. It's a local SQLite file, `./test.db`.
Calling `sql.Open`, I'm able to connect with a SQLite3 driver to the database
at that connection string. The `setupDatabase` function returns that database
connection pointer. Things like `Exec` and `QueryRow` can be called on `db`. I
also need to make sure I close the connection to the database with a `defer`.
Here is a full example of connecting to a local SQLite database and inserting a
record:
```go
package main
import (
"database/sql"
"fmt"
"os"
_ "github.com/mattn/go-sqlite3"
)
func setupDatabase() *sql.DB {
databaseString := os.Getenv("GOOSE_DBSTRING")
if len(databaseString) == 0 {
fmt.Println("Error retrieving `GOOSE_DBSTRING` from env")
os.Exit(1)
}
db, err := sql.Open("sqlite3", databaseString)
if err != nil {
fmt.Printf("Error opening database: %v\n", err)
os.Exit(1)
}
return db
}
func main() {
db := setupDatabase()
defer db.Close()
	query := `insert into users (name) values (?);`
	db.Exec(query, "Josh")
}
```

@@ -1,44 +0,0 @@
# Create A Slice From An Array
Slices in Go are a flexible abstraction over arrays. We can create a slice from
an array with the `[n:m]` _slicing_ syntax. We specify the left (inclusive)
and right (exclusive) bounds of the array that we want to create the slice
relative to.
We can exclude the lower bound, which translates to the `0` index of the array.
We can exclude the upper bound, which translates to the end of the array. We can
even exclude both ends of the _slicing_ syntax which means creating a slice of
the entire array.
Here is an example of each of those:
```go
package main
import "fmt"
func main() {
arr := [...]string{
"taco",
"burrito",
"torta",
"enchilada",
"quesadilla",
"pozole",
}
firstTwo := arr[:2]
lastTwo := arr[len(arr)-2:]
all := arr[:]
fmt.Println("First two:", firstTwo)
// First two: [taco burrito]
fmt.Println("Last two:", lastTwo)
// Last two: [quesadilla pozole]
fmt.Println("All:", all)
// All: [taco burrito torta enchilada quesadilla pozole]
}
```
[source](https://go.dev/blog/slices-intro#slices)

@@ -1,59 +0,0 @@
# Detect If Stdin Comes From A Redirect
Reading lines of input from `stdin` is flexible. And we may need our program to
behave differently depending on where that input is coming from. For instance,
if data is redirected or piped to our program, we scan and process it directly.
Otherwise, we need to prompt the user to enter in specific info and go from
there.
We can detect whether [`os.Stdin`](https://pkg.go.dev/os#pkg-variables) is
being piped to, redirected to, or whether we should prompt the user by looking
at the file mode descriptor of
[`os.Stdin.Stat()`](https://pkg.go.dev/os#File.Stat).
```go
package main
import (
"bufio"
"fmt"
"os"
)
func main() {
file, err := os.Stdin.Stat()
if err != nil {
fmt.Printf("Error checking stdin: %v\n", err)
os.Exit(1)
}
fromTerminal := (file.Mode() & os.ModeCharDevice) != 0
fromAPipe := (file.Mode() & os.ModeNamedPipe) != 0
if fromTerminal {
fmt.Println("This is Char Device mode, let's prompt user for input")
termScanner := bufio.NewScanner(os.Stdin)
for termScanner.Scan() {
fmt.Printf("- %s\n", termScanner.Text())
break
}
} else if fromAPipe {
fmt.Println("This is Named Pipe mode, contents piped in")
pipeScanner := bufio.NewScanner(os.Stdin)
for pipeScanner.Scan() {
fmt.Printf("- %s\n", pipeScanner.Text())
}
} else {
fmt.Println("This means the input was redirected")
redirectScanner := bufio.NewScanner(os.Stdin)
for redirectScanner.Scan() {
fmt.Printf("- %s\n", redirectScanner.Text())
}
}
}
```
If `os.ModeCharDevice` is set, then we are connected to a character device, like the
terminal. We can see if input is being piped in by checking against
`os.ModeNamedPipe`. Otherwise, there are a variety of file modes and I'm
willing to assume we're dealing with a regular file redirect at that point.
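Assuming this program lives in the current module, each of the three paths can
be exercised from a shell like so (the sample file path is hypothetical):

```bash
$ go run . < samples/001.txt        # redirected file
$ cat samples/001.txt | go run .    # named pipe
$ go run .                          # terminal; type a line and hit enter
```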

@@ -1,49 +0,0 @@
# Deterministically Seed A Random Number Generator
If you need a random number in Go, you can always reach for the various
functions in the `rand` package.
```go
package main
import (
"fmt"
"math/rand"
)
func main() {
for range 5 {
roll := rand.Intn(6) + 1
fmt.Printf("- %d\n", roll)
}
}
```
Each time I run that, I get a random set of values. Often in programming, we
want some control over the randomness. We want to _seed_ the randomness so that
it is deterministic. We want random, but the kind of random where we know how
we got there.
```go
package main
import (
"fmt"
"math/rand"
)
func main() {
seed := int64(123)
src := rand.NewSource(seed)
rng := rand.New(src)
for range 5 {
roll := rng.Intn(6) + 1
fmt.Printf("- %d\n", roll)
}
}
```
In this second snippet, we create a `Source` with a specific seed value that we
can use with a custom `Rand` struct. We can then deterministically get random
numbers from it.

@@ -1,55 +0,0 @@
# Difference Between Slice And Pointer To Slice
Though a slice can be thought of and used as a flexible, variable-length
array-like data structure, it is important to understand that it is also a
special kind of pointer to an underlying array.
This matters when a function receives a slice versus a pointer to a slice as
an argument, depending on what it is doing with that slice.
If the function is only accessing or updating elements in the slice, there is no
meaningful difference between these two functions and we might as well use the
former.
```go
func replaceAtIndex(slice []string, index int, value string) {
slice[index] = value
}
func replaceAtIndexPtr(slice *[]string, index int, value string) {
(*slice)[index] = value
}
```
On the other hand, if the receiving function needs to append to or replace the
slice, then we need to pass a pointer to the slice. A direct slice argument
will result in only the function-local copy getting replaced.
```go
package main
import (
"fmt"
)
func main() {
s1 := []int{8, 6, 7, 9}
s2 := []int{8, 6, 7, 9}
addItem(s1, 11)
fmt.Printf("s1: %v\n", s1) //=> s1: [8 6 7 9]
addItemPtr(&s2, 11)
fmt.Printf("s2: %v\n", s2) //=> s2: [8 6 7 9 11]
}
func addItem(slice []int, value int) {
slice = append(slice, value)
}
func addItemPtr(slice *[]int, value int) {
(*slice) = append(*slice, value)
}
```
[source](https://go.dev/tour/moretypes/8)

@@ -1,56 +0,0 @@
# Do Something N Times
With Go 1.22 there is a new for-range syntax that makes looping a bit easier
and more compact.
Instead of needing to set up our 3-part for-loop syntax, we can say we want to
do something `N` times with `for range N`.
```go
for range n {
// do something
}
```
Let's look at an actual, runnable example:
```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	food := []string{"taco", "burrito", "torta", "enchilada", "tostada"}

	// pick a random item 5 times; math/rand is automatically seeded (Go 1.20+)
	for range 5 {
		randomIndex := rand.Intn(len(food))
		fmt.Println(food[randomIndex])
	}
}
```
The output is random and might look something like this:
```bash
$ go run loop.go
taco
burrito
tostada
taco
enchilada
```
I appreciate this syntax addition because it feels very akin to Ruby's `#times`
method:
```ruby
5.times do
# do something
end
```
[source](https://eli.thegreenplace.net/2024/ranging-over-functions-in-go-123/)

@@ -1,26 +0,0 @@
# Find Executables Installed By Go
When you install an executable using `go install`, it puts that executable in
the `bin` directory designated by the `GOBIN` environment variable. If that env
var isn't set, then it falls back to one of `$GOPATH/bin` or `$HOME/go/bin`.
When I run `go help install`, it tells me as much:
```
Executables are installed in the directory named by the GOBIN environment
variable, which defaults to $GOPATH/bin or $HOME/go/bin if the GOPATH
environment variable is not set.
```
So, if I am to install something like [`tern`](https://github.com/jackc/tern),
```bash
$ go install github.com/jackc/tern/v2@latest
```
it is going to place that binary in `~/go/bin` for me.
```bash
$ which tern
/Users/jbranchaud/go/bin/tern
```
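To see which values are in play on a given machine, `go env` can report them
directly:

```bash
$ go env GOBIN GOPATH
```

If `GOBIN` prints as an empty line, the fallback behavior described above
applies.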

@@ -1,51 +0,0 @@
# Format Date And Time With Time Constants
The Go [`time` package](https://pkg.go.dev/time) has a [`Format`
function](https://pkg.go.dev/time#Time.Format) for displaying the parts of a
date and time in standard and custom ways. It works a bit differently than you
might be used to from other languages. Rather than using `strftime` identifiers
like in this string `"%B %d, %Y"`, there is a canonical date that is used as a
reference point.
That canonical date is from January 2nd, 2006. That was a Monday. It was at 5
seconds after 3:04PM. The Unix format of it looks like `"Mon Jan _2 15:04:05
MST 2006"`.
```
package main
import (
"fmt"
"time"
)
func main() {
// This specific time pulled from `time.Format` docs
t, _ := time.Parse(time.UnixDate, "Wed Feb 25 11:06:39 PST 2015")
// Reference date and time:
// "Mon Jan _2 15:04:05 MST 2006"
strf1 := t.Format("|2006|02|01|03:04:05|Day: Mon|")
fmt.Println("strf1:", strf1)
// strf1: |2015|25|02|11:06:39|Day: Wed|
strf2 := t.Format(time.DateTime)
strf3 := t.Format(time.RubyDate)
strf4 := t.Format(time.Kitchen)
fmt.Println("DateTime:", strf2) // DateTime: 2015-02-25 11:06:39
fmt.Println("RubyDate:", strf3) // RubyDate: Wed Feb 25 11:06:39 +0000 2015
fmt.Println("Kitchen:", strf4) // Kitchen: 11:06AM
}
```
Though there are a [variety of useful formatting
constants](https://pkg.go.dev/time#pkg-constants) already available like
`DateTime`, `RubyDate`, `Kitchen`, etc., we can also define our own formatting
string by using the reference values for each part of a date and time.
If you want to reference the year, whether as `YYYY` or `YY`, it is always
going to be a form of `2006`, so `2006` or `06` respectively. Even though the
above time variable is in February, our format strings will always need to use
one of `Jan`, `January`, `01` or `1`.

@@ -1,39 +0,0 @@
# Parse A String Into Individual Fields
Let's say you're reading in data from a file or otherwise dealing with an
arbitrary string of data. If that string has a series of values separated by
whitespace, you can parse it into individual fields with
[`strings.Fields`](https://pkg.go.dev/strings#Fields).
```go
package main

import (
"fmt"
"strings"
)
func main() {
data := "3 5 2 6 7 1 9"
fields := strings.Fields(data)
fmt.Printf("Fields: %v", fields)
// [3 5 2 6 7 1 9]
}
```
Here is another example where we can see that `strings.Fields` handles runs of
whitespace as well as leading and trailing whitespace:
```go
package main

import (
"fmt"
"strings"
)
func main() {
data := " go java c++ rust "
fields := strings.Fields(data)
fmt.Printf("%v", fields)
// [go java c++ rust]
}
```

@@ -1,65 +0,0 @@
# Parse Flags From CLI Arguments
Though we can grab the arguments to a Go program from `os.Args`, it requires
some manual parsing. With the built-in `flag` package, we can declare specific
flags our program accepts, by type. When we parse them, they will be separated
out from the rest of the positional arguments.
Here is an example of a program that accepts a boolean `debug` flag. This
will work with either `-debug` or `--debug`.
```go
package main
import (
"flag"
"fmt"
"os"
)
func main() {
var debug bool
flag.BoolVar(&debug, "debug", false, "turns on debug mode, extra logging")
flag.Parse()
positionalArgs := flag.Args()
if len(positionalArgs) < 1 {
fmt.Println("Please specify which part to run: 1 or 2")
os.Exit(1)
}
if debug {
fmt.Println("We are in debug mode...")
fmt.Println("Received the following argument:", positionalArgs[0])
}
// ...
}
```
We can run the program in debug mode like so:
```bash
$ go run . --debug 123
We are in debug mode...
Received the following argument: 123
```
We can also take advantage of the `help` flag that we get for free:
```bash
$ go run . --help
Usage of /var/folders/62/lx9pcjbs1zbd83zg6twwym2r0000gn/T/go-build3212087168/b001/exe/test:
-debug
turns on debug mode, extra logging
```
Note: any recognized flags need to come before any of the positional arguments.
The `debug` flag won't be picked up if we run the program like this:
```bash
$ go run . 123 --debug
```
[source](https://pkg.go.dev/flag)

@@ -1,65 +0,0 @@
# Pass A Struct To A Function
Go operates as _pass-by-value_ which means that when we pass a struct to a
function, the receiving function gets a copy of the struct. Two things worth
noticing about that are 1) an extra memory allocation happens when calling the
function and 2) altering the struct does not affect the original in the calling
context.
On the other hand, we can have a function that takes a pointer to a struct.
When we call that function, we have a reference to the memory location of the
struct instead of a copy of the struct. That means no additional allocation and
modifications to the dereferenced struct are modifications to the original in
the calling context.
Here is an example that demonstrates both of these. Notice the printed output
that is included in comments at the end which shows memory locations and
contents of the struct at various points.
```go
package main
import "fmt"
type Order struct {
Item string
Quantity int
DineIn bool
}
func main() {
order := Order{Item: "taco", Quantity: 3, DineIn: true}
fmt.Println("Order:", order)
fmt.Printf("main - Loc: %p\n", &order)
doubledOrder := doubleOrder(order)
fmt.Println("Double Order:", doubledOrder)
fmt.Println("Original Order:", order)
doubleOrderPtr(&order)
fmt.Println("Double Order Ptr:", order)
}
func doubleOrder(order Order) Order {
fmt.Printf("doubleOrder - Loc: %p\n", &order)
order.Quantity *= 2
return order
}
func doubleOrderPtr(order *Order) {
fmt.Printf("doubleOrderPtr - Loc: %p\n", order)
(*order).Quantity *= 2
}
// Order: {taco 3 true}
// main - Loc: 0xc0000b4000
// doubleOrder - Loc: 0xc0000b4040
// Double Order: {taco 6 true}
// Original Order: {taco 3 true}
// doubleOrderPtr - Loc: 0xc0000b4000
// Double Order Ptr: {taco 6 true}
```

@@ -1,32 +0,0 @@
# Produce The Zero Value For A Generic Type
While writing a _pop_ function that would work with slices of a generic type, I
ran into the issue of needing to produce a zero value of type `T` when
returning early for an empty slice.
The way to arbitrarily get the zero value of a generic in Go is with `*new(T)`.
I was able to use this in my `Pop` function like so:
```go
func Pop[T any](slice []T) (T, error) {
if len(slice) == 0 {
return *new(T), fmt.Errorf("cannot pop an empty slice")
}
lastItem := slice[len(slice)-1]
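	// note: this re-slice only shrinks the function-local slice header;
	// the caller's slice still has its original length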
slice = slice[:len(slice)-1]
return lastItem, nil
}
```
If this is happening in multiple functions and we want a more self-documenting
approach, we can pull it out into a function `zero`:
```go
func zero[T any]() T {
return *new(T)
}
```

@@ -1,39 +0,0 @@
# Redirect File To Stdin During Delve Debug
I have a go program that accepts input from stdin. The way I've been running
the program as I develop it is to redirect the contents of some sample files to
the program.
```bash
$ go run . < sample/001.txt
```
When I then go to debug this program with
[Delve](https://github.com/go-delve/delve), I'd still like to be able to
redirect a file into the program to reproduce the exact behavior I'm seeing.
The following won't work:
```bash
$ dlv debug . < samples/001.txt
Stdin is not a terminal, use '-r' to specify redirects for the target process or --allow-non-terminal-interactive=true if you really want to specify a redirect for Delve
```
Fortunately, `dlv` sees what I'm trying to do and makes a recommendation. The
`-r` flag can be used to specify redirects for the target process. The [`dlv`
redirect
docs](https://github.com/go-delve/delve/blob/master/Documentation/usage/dlv_redirect.md)
explain that `-r` can be passed a `source:destination`. The `source` is `stdin`
by default, but can also be `stdout` and `stderr`.
I can redirect my file into the debugging session of my program like so:
```bash
$ dlv debug . -r stdin:samples/001.txt
```
Or even more succinctly:
```bash
$ dlv debug . -r samples/001.txt
```

@@ -1,52 +0,0 @@
# Sort Slice In Ascending Or Descending Order
The [`slices.Sort`](https://pkg.go.dev/slices#Sort) function defaults to
sorting a slice in ascending order. If we want to control the sort order, we
have to do a little more work. We can reach for the
[`slices.SortFunc`](https://pkg.go.dev/slices#SortFunc) function. This allows
us to define a sort function and in that function we can control whether the
sort order is ascending or descending.
Here I've defined `SortItems` which takes a list of items constrained by the
[`cmp.Ordered`](https://pkg.go.dev/cmp#Ordered) interface (so things like
`int`, `string`, `uint64`, etc.). It takes a direction (`ASC` or `DESC`) as a
second argument. It does the directional sort based on that second argument.
```go
import (
"cmp"
"fmt"
"slices"
)
type Direction int
const (
ASC Direction = iota
DESC
)
func SortItems[T cmp.Ordered](items []T, dir Direction) {
slices.SortFunc(items, func(i, j T) int {
if dir == ASC {
return cmp.Compare(i, j)
} else if dir == DESC {
return cmp.Compare(j, i)
} else {
panic(fmt.Sprintf("Unrecognized sort direction: %d", dir))
}
})
}
// items := []int{3,2,8,1}
// SortItems(items, ASC)
// // items => [1,2,3,8]
// SortItems(items, DESC)
// // items => [8,3,2,1]
```
Because `slices.SortFunc` expects a negative value, zero, or positive value to
determine the sort order, we use
[`cmp.Compare`](https://pkg.go.dev/cmp#Compare) which returns those kinds of
values. For ascending, we compare `i` to `j`. For descending, we swap them,
comparing `j` to `i` to get the reverse sort order.

@@ -1,58 +0,0 @@
# Write A Custom Scan Function For File IO
By default a [`bufio.Scanner`](https://pkg.go.dev/bufio#Scanner) will scan
input line-by-line. In other words, splitting on newlines such that each
iteration will emit everything up to the next newline character.
We can write our own `SplitFunc` and override the default one by calling
`scanner.Split` with it. Our custom scan function needs to match the type
signature of [`SplitFunc`](https://pkg.go.dev/bufio#SplitFunc).
Here is a custom one that emits each individual character but omits the
newlines.
```go
func ScanChar(data []byte, atEOF bool) (int, []byte, error) {
	// use && so any data still buffered at EOF is processed rather than dropped
	if atEOF && len(data) == 0 {
return 0, nil, nil
}
start := 0
for start < len(data) {
if !utf8.FullRune(data[start:]) {
return 0, nil, nil
}
r, size := utf8.DecodeRune(data[start:])
if r == utf8.RuneError {
return 0, nil, fmt.Errorf("invalid UTF-8 encoding")
}
if r != '\n' {
return start + size, data[start:start+size], nil
}
// found a \n, advance the start position
start += size
}
return start, nil, nil
}
```
We can then use this `ScanChar` function with a `bufio.Scanner` like so:
```go
func ReadFileByCharacter(file io.Reader) {
scanner := bufio.NewScanner(file)
// override default SplitFunc
scanner.Split(ScanChar)
for scanner.Scan() {
char := scanner.Text()
fmt.Printf("- %s\n", char)
}
}
```

@@ -1,29 +0,0 @@
# Connect To A Database By Color
All of your PostgreSQL databases in Heroku are given attachment names that use
a random color. This might be _pink_, _brown_, _cobalt_, etc. And the
attachment names then look like `HEROKU_POSTGRESQL_PINK`,
`HEROKU_POSTGRESQL_BROWN`, `HEROKU_POSTGRESQL_COBALT`, etc.
We can connect to a Heroku-managed PostgreSQL instance from the command-line
like so:
```bash
$ heroku pg:psql --app my-app
```
This is going to connect to the _default_ database which is the one with the
`DATABASE_URL` attachment.
There are lots of instances where we may have other databases besides the
primary (e.g. let's say we have a read replica follower). If we want to connect
to that one, we can do so by _color_.
If that database's attachment is `HEROKU_POSTGRESQL_IVORY`, then we'd connect
to it like so:
```bash
$ heroku pg:psql ivory --app my-app
```
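If you don't remember the attachment colors offhand, `pg:info` lists each
database along with its attachment name:

```bash
$ heroku pg:info --app my-app
```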
[source](https://devcenter.heroku.com/articles/managing-heroku-postgres-using-cli#pg-psql)

@@ -1,17 +0,0 @@
# Open Dashboard For Specific Add-On
I've needed to check the papertrail logs for my Heroku-hosted Rails app more
times than I can count. I open a browser tab, go through several
layers of navigation to get to my app's dashboard, and then click the
papertrail link under _Add-ons_.
There is a much quicker way using the Heroku CLI.
```bash
$ heroku addons:open papertrail -a my-app-name
Opening https://addons-sso.heroku.com/apps/abc123/addons/efg456...
```
It sends you to an add-ons SSO link in the browser which authenticates you and
drops you into the dashboard for that specific add-on. You just need to specify
the add-on name and the app name.
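And if the exact add-on name escapes you, listing the app's add-ons first works
well (app name hypothetical):

```bash
$ heroku addons -a my-app-name
```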

@@ -1,34 +0,0 @@
# Specify Default Team And App For Project
Typically when you run commands with the Heroku CLI you'll need to specify the
name of the app on Heroku you're targeting with the `--app` flag. However, to
first see the names of the apps you may want to run `heroku apps` (or `heroku
list`). That will list the apps for your default team.
If you need to see apps for a different team (i.e. organization), you'll need to
specify that team either with the `--team` flag or by setting that as an
environment variable.
Here I do the latter in an `.envrc` file:
```
# Heroku
export HEROKU_ORGANIZATION=visualmode
```
Once that is set and the environment reloaded, running `heroku apps` will show
the apps specific to that team on Heroku.
Similarly, if you want to set a default app for your project so that you don't
have to always specify the `--app` flag, you can update your `.envrc`
accordingly.
```
# Heroku
export HEROKU_ORGANIZATION=visualmode
export HEROKU_APP=my-app
```
I had a hard time finding official documentation for this which is why I'm
writing this up here. I've manually verified this works with my own team and
app.

@@ -1,39 +0,0 @@
# Allow Number Input To Accept Decimal Values
Here is a number input element:
```html
<input type="number" id="amount" required class="border" />
```
This renders an empty number input box with up and down arrows which will, by
default, increment or decrement the value by **1**.
Of course, I can manually edit the input, typing in a value like `1.25`.
However, when I submit that via an HTML form, the submission will be prevented
and the browser will display a validation error.
> Please enter a valid value. The two nearest valid values are 1 and 2.
If I want to be able to input a decimal value like this, I need to change the
`step` value. It defaults to `1`, but I could change it to `2`, `10`, or in
this case to `0.01`.
```html
<input type="number" step="0.01" id="amount" required class="border" />
```
Notice now that as you click the up and down arrows, the value is incremented
and decremented by **0.01** at a time.
If I want to maintain the step value of `1` while allowing decimal values, I
can instead set the `step` value to be `any`.
```html
<input type="number" step="any" id="amount" required class="border" />
```
See the [MDN docs on number
inputs](https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/input/number)
for more details.

@@ -1,28 +0,0 @@
# Disclose Additional Details
You can add extra details to an HTML page that are only disclosed if the user
chooses to disclose them. To do that, we use the `<details>` tag. This tag
should have a `<summary>` tag nested within it. Anything else nested within
`<details>` will be what is disclosed when it is toggled open. The `<summary>`
is what is displayed when it is not open.
Here is a `<details>` block I recently added to [Ruby Operator
Lookup](https://www.visualmode.dev/ruby-operators).
```html
<details className="pt-2 pb-6">
<summary>What is this thing?</summary>
<p className="pl-3 pt-2 text-gray-700 text-sm">
Ruby is an expressive, versatile, and flexible dynamic programming language. That means there are all kinds of syntax features, operators, and symbols we can encounter that might look unfamiliar and are hard to look up. Ruby Operator Lookup is a directory of all these language features.
</p>
<p className="pl-3 pt-2 text-gray-700 text-sm">
Use the search bar to narrow down the results. Then click on a button for the operator or symbol you want to explore further.
</p>
</details>
```
On page load, the only thing we see is "What is this thing?" with a triangle
symbol next to it. If we click the summary, then the entire details block
(those two `<p>` tags) is disclosed.
[source](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/details)

@@ -1,40 +0,0 @@
# Add Styled Alerts To GitHub Markdown Documents
The GFM (GitHub Flavored Markdown) variant of markdown adds some nice features
to our GitHub-rendered markdown documents.
One such feature that has been around for a couple of years, but which I only
just learned about, is styled alerts. There are five of them, each with a
different color and icon to help convey meaning.
```
> [!NOTE]
> Useful information that users should know, even when skimming content.

> [!TIP]
> Helpful advice for doing things better or more easily.

> [!IMPORTANT]
> Key information users need to know to achieve their goal.

> [!WARNING]
> Urgent info that needs immediate user attention to avoid problems.

> [!CAUTION]
> Advises about risks or negative outcomes of certain actions.
```
I just added the following to the top of one of my project's READMEs to help me
remember that it is not under active development.
```
> [!WARNING]
> This repo is not under active development, you might be looking for
> [til-visualmode-dev](https://github.com/jbranchaud/til-visualmode-dev).
```
Visit the GitHub docs for
[Alerts](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#alerts)
to see examples of how these render.
[source](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#alerts)

@@ -1,21 +0,0 @@
# Analyze Your Website Performance
The [PageSpeed Insights](https://pagespeed.web.dev/) tool from Google is a
great way to quickly get actionable insights about where to improve your
website and app's _Performance_, _Accessibility_, _Best Practices_, and _SEO_.
To see how your public site or app does, grab its URL and analyze it at
[PageSpeed Insights](https://pagespeed.web.dev/).
It will take a minute to run on either Mobile or Desktop (make sure to check
both) and then will output a headline number (out of 100) for each of the four
categories.
You can then dig in to each category to see what recommendations they make for
improving your score.
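If you'd rather script this, PageSpeed Insights also has a public API. A
minimal sketch using the v5 `runPagespeed` endpoint, hitting a hypothetical
URL:

```bash
$ curl "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com&category=performance"
```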
This can also be run directly from Chrome devtools which is useful if you want
to see how a locally running site is doing. You can run the analysis from the
_Lighthouse_ tab of devtools. Note: if the _Performance_ score looks bad, it
might be that you are running a non-optimized dev server that isn't reflective
of how your site would do in production.

@@ -1,29 +0,0 @@
# Digraph Unicode Characters Have a Titlecase
Coming from primarily being exposed to the US American alphabet, I'm familiar
with characters that I type into the computer having one of two cases. Either
it is lowercase by default (`c`) or I can hit the shift key to produce the
uppercase version (`C`).
Unicode, which has broad support for character encoding across most languages,
has a handful of characters that are called _digraphs_. These are single code
points, but look like they are made up of two characters.
A good example of this is `dž`. And if that character were to appear in an all
uppercase word, then it would display as `DŽ`.
But what if it appears at the beginning of a capitalized word?
That's where _titlecase_ comes into the picture -- `Dž`.
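A quick way to see all three cases from a shell, leaning on Python's Unicode
tables (a sketch; any language with full Unicode case mappings would do):

```bash
$ python3 -c 'word = "džem"; print(word, word.upper(), word.title())'
džem DŽEM Džem
```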
From [wikipedia](https://en.wikipedia.org/wiki/D%C5%BE):
> Note that when the letter is the initial of a capitalised word (like Džungla
> or Džemper, or personal names like Džemal or Džamonja), the ž is not
> uppercase. Only when the whole word is written in uppercase, is the Ž
> capitalised.
(I find it odd that wikipedia's article on this digraph code point is using
separate characters instead of the digraph.)
[source](https://devblogs.microsoft.com/oldnewthing/20241031-00/?p=110443)

@@ -1,34 +0,0 @@
# Download A Google Doc As Specific Format
I was recently given a public Google Doc URL and I was curious if I could
download it from the command line. I didn't want to have to install a special
CLI though. I was hoping to use something like `curl`.
A brief chat with Claude and I learned that not only can I use `curl`, but I
can specify the format in the _export_ URL.
```bash
$ export GOOGLE_DOC_URL="https://docs.google.com/document/d/157rMgHeBf76T9TZnUjtrUyyS2XPwG0tObr-OjYNfMaI"
$ echo $GOOGLE_DOC_URL
https://docs.google.com/document/d/157rMgHeBf76T9TZnUjtrUyyS2XPwG0tObr-OjYNfMaI
$ curl -L "$GOOGLE_DOC_URL/export?format=pdf" -o doc.pdf
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 414 0 414 0 0 2763 0 --:--:-- --:--:-- --:--:-- 2895
100 16588 0 16588 0 0 56214 0 --:--:-- --:--:-- --:--:-- 167k
$ ls doc.pdf
doc.pdf
```
I append `/export` and then include the `?format=pdf` query param to specify
that I want the document to be exported in PDF format. With the `-o` flag I can
specify the name and extension of the output file.
This is handy on its own, but noticing that Google Docs supports other export
formats, I thought it would be useful to go back-and-forth with Claude to
sketch out a script that can do this and prompt me (with `fzf`) for the file
type -- [here is the gist for
`gdoc-download`](https://gist.github.com/jbranchaud/cf3d2028107a1bd8484eed7cca0fcdab).

@@ -1,14 +0,0 @@
# Exclude AI Overview From Google Search
At the top of most Google searches these days is a section of text that takes a
moment to appear, presumably because it is being generated in the moment. This
is Google's _AI Overview_. These are sometimes useful summaries of the article
you are about to click on anyway. Other times the overview is no good, it takes
up a bunch of screen real estate, and may even [10x the energy consumed by a
regular
search](https://www.reddit.com/r/technology/comments/1dsvefb/googles_ai_search_summaries_use_10x_more_energy/).
If you want to exclude the _AI Overview_, tack on a `-ai` when writing out your
search query.
[source](https://www.yahoo.com/tech/turn-off-ai-overview-results-170014202.html)

@@ -1,27 +0,0 @@
# Grab The RSS Feed For A Substack Blog
I've been attempting to put more energy into finding and reading blog posts via
an RSS feed reader. This as opposed to scrolling and scrolling and hoping that
the algorithm turns up an interesting article or two.
A lot of people who have been blogging for a while have a handy RSS feed link
prominently displayed on their site. We love to see it!
There are a few people whose writing I really enjoy that distribute their words
via Substack. I couldn't find an RSS feed link, prominent or otherwise,
anywhere on their Substacks. What I did learn, after some searching, is that
you can tack `/feed` onto the end of someone's Substack URL and that will give
you the XML feed.
For example:
```
Substack blog landing page URL:
https://registerspill.thorstenball.com
Substack blog RSS feed URL:
https://registerspill.thorstenball.com/feed
```
Grab that feed URL and paste it into your feed reader and you should start
seeing their stuff show up.
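A quick sanity check from the terminal confirms it's a regular XML feed:

```bash
$ curl -s https://registerspill.thorstenball.com/feed | head -3
```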

@@ -1,28 +0,0 @@
# Verify Site Ownership With DNS Record
To run your site through Google Search Console and get detailed reports, you
need to verify that you own the site. There are several manual ways of doing
this that involve sticking a value unique to your URL in a file or header tag.
There is a better way though.
By adding a TXT DNS record wherever your domain's DNS is managed, you can prove
to Google that you own the domain. That verification applies to all paths and
subdomains of that domain.
Some providers like Cloudflare have a mostly-automated process for this that
Google can hook into as long as you grant permission via OAuth.
You can also manually create the TXT record if necessary.
Either way, it will look something like:
```bash
$ dig -t TXT visualmode.dev
;; ANSWER SECTION:
visualmode.dev.         377     IN      TXT     "google-site-verification=MBZ2S2fhnh2gHRxFniRrYW-O6mdyimJDRFj-fvblwtk"
```
More details are provided in the [Google Search Console
docs](https://support.google.com/webmasters/answer/9008080?hl=en#domain_name_verification).

@@ -1,55 +0,0 @@
# Ensure Resources Always Get Closed
Java has a construct known as _try-with-resources_ that allows us to always
ensure opened resources get closed. This is safer than similar cleanup in the
`finally` block which could still leave a memory leak if an error occurs in
that block.
To use the _try-with-resource_ construct, instantiate your opened resource in
parentheses with the `try`.
```java
try (BufferedReader reader = new BufferedReader(new FileReader(filename))) {
// ...
}
```
The resource will be automatically closed when the try/catch block completes.
Here is a full example:
```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
public class FileReaderExample {
public static void main(String[] args) {
String fileName = "example.txt";
try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
String line;
int lineCount = 0;
while ((line = reader.readLine()) != null && lineCount < 5) {
System.out.println(line);
lineCount++;
}
} catch (IOException e) {
System.out.println("An error occurred while reading the file: " + e.getMessage());
}
}
}
```
You can even specify multiple resources in one `try`. The above does that, but
this will make it more obvious:
```java
try (FileReader fr = new FileReader(filename);
BufferedReader br = new BufferedReader(fr)) {
// ...
}
```
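
One detail worth knowing: resources are closed in the reverse of the order they
were declared. Here's a small self-contained sketch that makes the order
visible (the `Resource` class is made up purely for illustration):

```java
public class CloseOrderDemo {
    // A tiny AutoCloseable whose close() announces itself
    static class Resource implements AutoCloseable {
        private final String name;

        Resource(String name) {
            this.name = name;
        }

        @Override
        public void close() {
            System.out.println("closing " + name);
        }
    }

    public static void main(String[] args) {
        // Declared a then b, so they are closed b then a
        try (Resource a = new Resource("a"); Resource b = new Resource("b")) {
            System.out.println("inside try");
        }
        // Output:
        //   inside try
        //   closing b
        //   closing a
    }
}
```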

[source](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html)

# Install Java On Mac With Brew

If you don't already have Java installed on your Mac, you can install it with
Homebrew.

```bash
$ brew install java
```

This will take a bit to run. When it's all complete, you'll go to run something
like a version check and see this:

```bash
$ java -version
The operation couldn't be completed. Unable to locate a Java Runtime.
Please visit http://www.java.com for information on installing Java.
```

This is because [OpenJDK](https://openjdk.org/), the open-source implementation
of the Java Development Kit (Java platform), does not get fully set up by
Homebrew.

You'll need to symlink `openjdk`, and the exact command with the correct paths
can be found by running the following:

```bash
$ brew info openjdk
...
For the system Java wrappers to find this JDK, symlink it with
  sudo ln -sfn /usr/local/opt/openjdk/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk.jdk
...
```

The paths may look different for you, so copy the exact command and run that.
Once the symlink is set, check the version again.

```bash
$ java -version
openjdk version "23" 2024-09-17
OpenJDK Runtime Environment Homebrew (build 23)
OpenJDK 64-Bit Server VM Homebrew (build 23, mixed mode, sharing)
```
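
To double check which JDK the system wrappers now resolve to, macOS ships a
`/usr/libexec/java_home` utility -- the output path below is what I'd expect
after the symlink, but it may differ on your machine:

```bash
$ /usr/libexec/java_home
/Library/Java/JavaVirtualMachines/openjdk.jdk/Contents/Home
```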

[source](https://stackoverflow.com/a/65601197/535590)

# Run A Hello World Program In Eclipse

First, you'll need to create a new Java Project if you don't already have one
to work in.

From there, you can add a new _Class_ to the `src` folder of that project. I'll
call mine `Greeting.java` and the only thing it will contain is a `main`
method.

```java
public class Greeting {
    public static void main(String[] args) {
        // fall back to "World" if no name argument was provided
        String name = args.length > 0 ? args[0] : "World";
        System.out.println("Hello, " + name + "!");
    }
}
```

This method tries to read a name from the arguments given to the program at
the time of execution. If one wasn't provided, the ternary falls back to
`"World"` as the default name. It then prints the greeting to stdout.

To run this program, we can either select _Run_ from the _Run_ menu (which will
result in `Hello, World!`) or we can select _Run Configurations..._ from the
same menu and add a custom name to _Program Arguments_ under the _Arguments_
tab.
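
Outside of Eclipse, the equivalent from a terminal would look something like
this (assuming `javac` and `java` are on your `PATH`, and using `Josh` as a
sample argument):

```bash
$ javac Greeting.java
$ java Greeting Josh
Hello, Josh!
```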
