mirror of
https://github.com/jbranchaud/til
synced 2026-01-18 06:28:02 +00:00
Compare commits: 13 commits (165049a865...e00ae58d87)

e00ae58d87, 63bb627716, 21385f4491, 5b47326ab3, c16d80fd94, edf38308da, dc7159c16c, 33f780a69f, dfe9c002ee, 3e34636d80, dcef57d344, 6580393b7a, 295fe153ad
README.md (16 lines changed)

```diff
@@ -10,7 +10,7 @@ pairing with smart people at Hashrocket.
 
 For a steady stream of TILs, [sign up for my newsletter](https://crafty-builder-6996.ck.page/e169c61186).
 
-_1481 TILs and counting..._
+_1491 TILs and counting..._
 
 ---
 
@@ -27,6 +27,7 @@ _1481 TILs and counting..._
 * [Deno](#deno)
 * [Devops](#devops)
 * [Docker](#docker)
+* [Drizzle](#drizzle)
 * [Elixir](#elixir)
 * [Gatsby](#gatsby)
 * [Git](#git)
@@ -208,6 +209,12 @@ _1481 TILs and counting..._
 - [List Running Docker Containers](docker/list-running-docker-containers.md)
 - [Run A Basic PostgreSQL Server In Docker](docker/run-a-basic-postgresql-server-in-docker.md)
 
+### Drizzle
+
+- [Create bigint Identity Column For Primary Key](drizzle/create-bigint-identity-column-for-primary-key.md)
+- [Drizzle Tracks Migrations In A Log Table](drizzle/drizzle-tracks-migrations-in-a-log-table.md)
+- [Get Fields For Inserted Row](drizzle/get-fields-for-inserted-row.md)
+
 ### Elixir
 
 - [All Values For A Key In A Keyword List](elixir/all-values-for-a-key-in-a-keyword-list.md)
@@ -433,6 +440,7 @@ _1481 TILs and counting..._
 ### Internet
 
 - [Add Emoji To GitHub Repository Description](internet/add-emoji-to-github-repository-description.md)
+- [Analyze Your Website Performance](internet/analyze-your-website-performance.md)
 - [Check Your Public IP Address](internet/check-your-public-ip-address.md)
 - [Enable Keyboard Shortcuts In Gmail](internet/enable-keyboard-shortcuts-in-gmail.md)
 - [Exclude AI Overview From Google Search](internet/exclude-ai-overview-from-google-search.md)
@@ -505,6 +513,7 @@ _1481 TILs and counting..._
 - [List Top-Level NPM Dependencies](javascript/list-top-level-npm-dependencies.md)
 - [Load And Use Env Var In Node Script](javascript/load-and-use-env-var-in-node-script.md)
 - [Make The Browser Editable With Design Mode](javascript/make-the-browser-editable-with-design-mode.md)
+- [Make Truly Deep Clone With Structured Clone](javascript/make-truly-deep-clone-with-structured-clone.md)
 - [Matching A Computed Property In Function Args](javascript/matching-a-computed-property-in-function-args.md)
 - [Matching Multiple Values In A Switch Statement](javascript/matching-multiple-values-in-a-switch-statement.md)
 - [Mock A Function With Return Values Using Jest](javascript/mock-a-function-with-return-values-using-jest.md)
@@ -515,6 +524,7 @@ _1481 TILs and counting..._
 - [Open Global npm Config File](javascript/open-global-npm-config-file.md)
 - [Parse A Date From A Timestamp](javascript/parse-a-date-from-a-timestamp.md)
 - [Pre And Post Hooks For Yarn Scripts](javascript/pre-and-post-hooks-for-yarn-scripts.md)
+- [Prevent Hidden Element From Flickering On Load](javascript/prevent-hidden-element-from-flickering-on-load.md)
 - [Purge Null And Undefined Values From Object](javascript/purge-null-and-undefined-values-from-object.md)
 - [Random Cannot Be Seeded](javascript/random-cannot-be-seeded.md)
 - [Reach Into An Object For Nested Data With Get](javascript/reach-into-an-object-for-nested-data-with-get.md)
@@ -696,6 +706,7 @@ _1481 TILs and counting..._
 - [A Better Null Display Character](postgres/a-better-null-display-character.md)
 - [Add Foreign Key Constraint Without A Full Lock](postgres/add-foreign-key-constraint-without-a-full-lock.md)
 - [Add ON DELETE CASCADE To Foreign Key Constraint](postgres/add-on-delete-cascade-to-foreign-key-constraint.md)
+- [Add Unique Constraint Using Existing Index](postgres/add-unique-constraint-using-existing-index.md)
 - [Adding Composite Uniqueness Constraints](postgres/adding-composite-uniqueness-constraints.md)
 - [Aggregate A Column Into An Array](postgres/aggregate-a-column-into-an-array.md)
 - [Assumed Radius Of The Earth](postgres/assumed-radius-of-the-earth.md)
@@ -715,6 +726,7 @@ _1481 TILs and counting..._
 - [Compute Hashes With pgcrypto](postgres/compute-hashes-with-pgcrypto.md)
 - [Compute The Levenshtein Distance Of Two Strings](postgres/compute-the-levenshtein-distance-of-two-strings.md)
 - [Compute The md5 Hash Of A String](postgres/compute-the-md5-hash-of-a-string.md)
+- [Concatenate Strings With A Separator](postgres/concatenate-strings-with-a-separator.md)
 - [Configure The Timezone](postgres/configure-the-timezone.md)
 - [Constructing A Range Of Dates](postgres/constructing-a-range-of-dates.md)
 - [Convert A String To A Timestamp](postgres/convert-a-string-to-a-timestamp.md)
@@ -994,6 +1006,7 @@ _1481 TILs and counting..._
 - [Select Value For SQL Counts](rails/select-value-for-sql-counts.md)
 - [Serialize With fast_jsonapi In A Rails App](rails/serialize-with-fast-jsonapi-in-a-rails-app.md)
 - [Set A Timestamp Field To The Current Time](rails/set-a-timestamp-field-to-the-current-time.md)
+- [Set DateTime To Include Time Zone In Migrations](rails/set-datetime-to-include-time-zone-in-migrations.md)
 - [Set default_url_options For Entire Application](rails/set-default-url-options-for-entire-application.md)
 - [Set Schema Search Path](rails/set-schema-search-path.md)
 - [Set Statement Timeout For All Postgres Connections](rails/set-statement-timeout-for-all-postgres-connections.md)
@@ -1432,6 +1445,7 @@ _1481 TILs and counting..._
 - [Generate Random 20-Character Hex String](unix/generate-random-20-character-hex-string.md)
 - [Get A List Of Locales On Your System](unix/get-a-list-of-locales-on-your-system.md)
 - [Get Matching Filenames As Output From Grep](unix/get-matching-filenames-as-output-from-grep.md)
+- [Get The SHA256 Hash For A File](unix/get-the-sha256-hash-for-a-file.md)
 - [Get The Unix Timestamp](unix/get-the-unix-timestamp.md)
 - [Global Substitution On The Previous Command](unix/global-substitution-on-the-previous-command.md)
 - [Globbing For All Directories In Zsh](unix/globbing-for-all-directories-in-zsh.md)
```
drizzle/create-bigint-identity-column-for-primary-key.md (new file, 48 lines)
# Create bigint Identity Column For Primary Key

Using the Drizzle ORM with Postgres, here is how we can create a table that
uses a [`bigint` data
type](https://orm.drizzle.team/docs/column-types/pg#bigint) as a primary key
[identity
column](https://www.postgresql.org/docs/current/ddl-identity-columns.html).

```typescript
import {
  pgTable,
  bigint,
  text,
  timestamp,
} from "drizzle-orm/pg-core";

// Users table
export const users = pgTable("users", {
  id: bigint({ mode: 'bigint' }).primaryKey().generatedAlwaysAsIdentity(),
  email: text("email").unique().notNull(),
  name: text("name").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```

There are a few key pieces here:

1. We import `bigint` so that we can declare a column of that type.
2. We specify that it is a primary key with `.primaryKey()`.
3. We declare its default value as `generated always as identity` via
   `.generatedAlwaysAsIdentity()`.

Note: you need to specify the `mode` for `bigint` or else you will see a
`TypeError: Cannot read properties of undefined (reading 'mode')` error.
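As a sketch of that `mode` option (this `events` table is hypothetical, not part of the example above): `mode: 'bigint'` maps the column to a JS `BigInt`, while `mode: 'number'` maps it to a plain JS `number`, which is safe as long as values stay below 2^53.

```typescript
import { pgTable, bigint } from "drizzle-orm/pg-core";

// hypothetical table: with mode 'number', ids come back as JS numbers
export const events = pgTable("events", {
  id: bigint({ mode: "number" }).primaryKey().generatedAlwaysAsIdentity(),
});
```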
If we run `npx drizzle-kit generate`, the SQL migration file that gets
generated will contain something like this:

```sql
--> statement-breakpoint
CREATE TABLE IF NOT EXISTS "users" (
  "id" bigint PRIMARY KEY GENERATED ALWAYS AS IDENTITY (sequence name "users_id_seq" INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1),
  "email" text NOT NULL,
  "name" text NOT NULL,
  "created_at" timestamp DEFAULT now() NOT NULL,
  CONSTRAINT "users_email_unique" UNIQUE("email")
);
```
drizzle/drizzle-tracks-migrations-in-a-log-table.md (new file, 39 lines)
# Drizzle Tracks Migrations In A Log Table

When I generate (`npx drizzle-kit generate`) and apply (`npx drizzle-kit
migrate`) schema migrations against my database with Drizzle, there are SQL
files that get created and run.

How does Drizzle know which SQL files have been run and which haven't?

Like many SQL schema migration tools, it uses a table in the database to record
this metadata. Drizzle defaults to calling this table `__drizzle_migrations`
and puts it in the `drizzle` schema (which is like a database namespace).

Let's take a look at this table for a project with two migrations:

```sql
postgres> \d drizzle.__drizzle_migrations
                      Table "drizzle.__drizzle_migrations"
   Column   |  Type   | Collation | Nullable |                         Default
------------+---------+-----------+----------+----------------------------------------------------------
 id         | integer |           | not null | nextval('drizzle.__drizzle_migrations_id_seq'::regclass)
 hash       | text    |           | not null |
 created_at | bigint  |           |          |
Indexes:
    "__drizzle_migrations_pkey" PRIMARY KEY, btree (id)

postgres> select * from drizzle.__drizzle_migrations;
 id |                               hash                               |  created_at
----+------------------------------------------------------------------+---------------
  1 | 8961353bf66f9b3fe1a715f6ea9d9ef2bc65697bb8a5c2569df939a61e72a318 | 1730219291288
  2 | b75e61451e2ce37d831608b1bc9231bf3af09e0ab54bf169be117de9d4ff6805 | 1730224013018
(2 rows)
```

Notice that Drizzle stores each migration record as [a SHA256 hash of the
migration
file](https://github.com/drizzle-team/drizzle-orm/blob/526996bd2ea20d5b1a0d65e743b47e23329d441c/drizzle-orm/src/migrator.ts#L52)
and a timestamp of when the migration was run.
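We can produce that same style of digest ourselves with `sha256sum` (the migration file here is a made-up stand-in, just for illustration):

```shell
# create a stand-in migration file (hypothetical contents)
printf 'CREATE TABLE "users" ("id" bigint);\n' > /tmp/0000_init.sql

# the same style of SHA256 digest Drizzle records in the hash column
sha256sum /tmp/0000_init.sql | cut -d ' ' -f 1
```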
[source](https://orm.drizzle.team/docs/drizzle-kit-migrate#applied-migrations-log-in-the-database)
drizzle/get-fields-for-inserted-row.md (new file, 56 lines)
# Get Fields For Inserted Row

With Drizzle, we can insert a row with a set of values like so:

```typescript
await db
  .insert(todoItems)
  .values({
    title,
    userId,
    description,
  })
```

The result of this is `QueryResult<never>`. In other words, nothing useful is
coming back to us from the database.

Sometimes an insert is treated as fire-and-forget (as long as it succeeds), or,
since we know what data we are inserting, we don't need the database to
respond. But what about values that are generated or computed by the database
-- such as an id from a sequence, timestamp columns that default to `now()`, or
generated columns?

To get all the fields of a freshly inserted row, we can tack on [the
`returning()` function](https://orm.drizzle.team/docs/insert#insert-returning)
(which likely adds something like [`returning
*`](https://www.postgresql.org/docs/current/dml-returning.html) to the insert
query under the hood).

```typescript
await db
  .insert(todoItems)
  .values({
    title,
    userId,
    description,
  })
  .returning()
```

This will have a return type that is an array of `todoItems` rows, which means
that for each inserted row we'll have all the fields (columns) for that row.

Alternatively, if we just need the generated ID for the new row(s), we can use
a partial return like so:

```typescript
await db
  .insert(todoItems)
  .values({
    title,
    userId,
    description,
  })
  .returning({ id: todoItems.id })
```
internet/analyze-your-website-performance.md (new file, 21 lines)
# Analyze Your Website Performance

The [PageSpeed Insights](https://pagespeed.web.dev/) tool from Google is a
great way to quickly get actionable insights about where to improve your
website or app's _Performance_, _Accessibility_, and _SEO_.

To see how your public site or app does, grab its URL and analyze it at
[PageSpeed Insights](https://pagespeed.web.dev/).

It will take a minute to run on either Mobile or Desktop (make sure to check
both) and then will output four headline numbers (out of 100), one for each
category.

You can then dig into each category to see what recommendations it makes for
improving your score.

This can also be run directly from Chrome devtools, which is useful if you want
to see how a locally running site is doing. You can run the analysis from the
_Lighthouse_ tab of devtools. Note: if the _Performance_ score looks bad, it
might be that you are running a non-optimized dev server that isn't reflective
of how your site would do in production.
@@ -5,6 +5,8 @@ an array-like object with all of the arguments to the function. Even if not
all of the arguments are referenced in the function signature, they can
still be accessed via the `arguments` object.

> For ES6+ compatibility, the spread syntax used via [rest parameters](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/rest_parameters) is preferred over the `arguments` object when accessing an arbitrary number of function arguments.
```javascript
function argTest(one) {
  console.log(one);
```
javascript/make-truly-deep-clone-with-structured-clone.md (new file, 61 lines)
# Make Truly Deep Clone With Structured Clone

There are a lot of ways to make a copy of an object. Most of them are hacks
that fail in certain circumstances. The spread trick only gives you a
shallow copy, where references to nested objects and arrays can still be
updated. The `JSON.stringify` trick has to make things like dates into strings,
so it is lossy.
There is, however, now a dedicated method for deep copies with broad support
called
[`structuredClone`](https://developer.mozilla.org/en-US/docs/Web/API/Window/structuredClone).
It is available on `window`. Let's take a look at it and see how it compares to
the spread operator trick.

```javascript
> // some data setup
> const data = { one: 1, two: 2, rest: [3,4,5] }

> const obj = { hello: 'world', taco: 'bell', data }

> const shallowObj = { ...obj }

> const deepObj = structuredClone(obj)

> // let's modify the original `data.rest` array
> data.rest.push(6)
4

> data
{ one: 1, two: 2, rest: [ 3, 4, 5, 6 ] }

> // now let's see who was impacted by that mutation
> obj
{
  hello: 'world',
  taco: 'bell',
  data: { one: 1, two: 2, rest: [ 3, 4, 5, 6 ] }
}

> shallowObj
{
  hello: 'world',
  taco: 'bell',
  data: { one: 1, two: 2, rest: [ 3, 4, 5, 6 ] }
}

> deepObj
{
  hello: 'world',
  taco: 'bell',
  data: { one: 1, two: 2, rest: [ 3, 4, 5 ] }
}
```

The `shallowObj` from the spread operator copy was mutated even though we
didn't intend for that. The `deepObj` from `structuredClone` was a true deep
copy and was unaffected.

[source](https://www.builder.io/blog/structured-clone)
javascript/prevent-hidden-element-from-flickering-on-load.md (new file, 55 lines)
# Prevent Hidden Element From Flickering On Load

Here is what it might look like to use [Alpine.js](https://alpinejs.dev/) to
sprinkle in some JavaScript for controlling a dropdown menu.

```html
<div x-data="{ profileDropdownOpen: false }">
  <button
    type="button"
    @click="profileDropdownOpen = !profileDropdownOpen"
  >
    <!-- some inner html -->
  </button>
  <div x-show="profileDropdownOpen" role="menu">
    <a href="/profile" role="menuitem">Your Profile</a>
    <a href="/sign-out" role="menuitem">Sign Out</a>
  </div>
</div>
```

Functionally that will work. You can click the button to toggle the menu open
and closed.

What you might notice, however, when you refresh the page is that the menu
flickers open as the page first loads and then disappears. This is a quirk of
the element being rendered before Alpine.js is loaded and the
[`x-show`](https://alpinejs.dev/directives/show) directive has a chance to take
effect.

To get around this, we can _cloak_ any element with an `x-show` directive that
should be hidden by default.

```html
<div x-data="{ profileDropdownOpen: false }">
  <button
    type="button"
    @click="profileDropdownOpen = !profileDropdownOpen"
  >
    <!-- some inner html -->
  </button>
  <div x-cloak x-show="profileDropdownOpen" role="menu">
    <a href="/profile" role="menuitem">Your Profile</a>
    <a href="/sign-out" role="menuitem">Sign Out</a>
  </div>
</div>
```

This addition needs to be paired with some custom CSS to hide any _cloaked_
elements.

```css
[x-cloak] { display: none !important; }
```

[source](https://alpinejs.dev/directives/cloak)
postgres/add-unique-constraint-using-existing-index.md (new file, 25 lines)
# Add Unique Constraint Using Existing Index

Adding a unique constraint to an existing column on a production table can
block updates. If we need to avoid this kind of locking for the duration of
index creation, then we can first create the index concurrently and then use
that existing index to back the unique constraint.

```sql
create unique index concurrently users_email_idx on users (email);

-- wait for that to complete

alter table users
  add constraint unique_users_email unique using index users_email_idx;
```

First, we concurrently create the index. It needs to be a unique index, since
Postgres requires that to back a unique constraint. The time this takes will
depend on how large the table is. That's the blocking time we are avoiding with
this approach. Then once that completes we can apply a unique constraint using
that preexisting index.

Note: if a non-unique value exists in the table for that column, building the
unique index will fail. You'll need to deal with that _duplicate_ value first.
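To find any offending duplicates ahead of time (using the same `users`/`email` example), a grouped count works:

```sql
select email, count(*)
from users
group by email
having count(*) > 1;
```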
[source](https://dba.stackexchange.com/questions/81627/postgresql-9-3-add-unique-constraint-using-an-existing-unique-index)
postgres/concatenate-strings-with-a-separator.md (new file, 53 lines)
# Concatenate Strings With A Separator

I was putting together an example of using a generated column that concatenates
string values from a few other columns. I used manual concatenation with the
`||` operator like so:

```sql
create table folders (
  id integer generated always as identity primary key,
  user_id integer not null,
  name text not null,
  parent_folder_id integer references folders(id),
  path text generated always as (
    user_id::text || ':' || lower(name) || ':' || coalesce(parent_folder_id::text, '0')
  ) stored
);
```

Instead of doing that manual concatenation for the `path` generated column, I
can use
[`concat_ws`](https://www.postgresql.org/docs/current/functions-string.html).

```sql
create table folders (
  id integer generated always as identity primary key,
  user_id integer not null,
  name text not null,
  parent_folder_id integer references folders(id),
  path text generated always as (
    concat_ws(
      ':',
      user_id::text,
      lower(name),
      coalesce(parent_folder_id::text, '0')
    )
  ) stored
);
```

The first argument to `concat_ws` is the separator I want to use. The remaining
arguments are the strings that should be concatenated with that separator.

One other thing that is nice about `concat_ws` is that it will ignore `null`
values that it receives.

```sql
> select concat_ws(':', 'one', 'two', null, 'three');
+---------------+
| concat_ws     |
|---------------|
| one:two:three |
+---------------+
```
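That is a meaningful difference from the `||` operator, where a single `null` operand nulls out the entire result:

```sql
> select 'one' || ':' || null || ':' || 'three';
-- null, because || propagates null through the whole expression
```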
rails/set-datetime-to-include-time-zone-in-migrations.md (new file, 45 lines)
# Set Datetime To Include Time Zone In Migrations

When using Rails and PostgreSQL, your migrations will contain DSL syntax like
`t.datetime` and `t.timestamps` which will produce columns using the
`timestamp` (`without time zone`) Postgres data type.

While reading [A Simple Explanation of Postgres' <code>Timestamp with Time
Zone</code>](https://naildrivin5.com/blog/2024/10/10/a-simple-explanation-of-postgres-timestamp-with-time-zone.html),
I learned that there is a way to configure your app to instead use
`timestamptz` by default. This data type is widely recommended as a good
default, so it is nice that we can configure Rails to use it.

First, add these lines to a new initializer (`config/initializers/postgres.rb`)
file.

```ruby
require "active_record/connection_adapters/postgresql_adapter"
ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.datetime_type = :timestamptz
```

Alternatively, you can configure this via `config/application.rb` per the
[Configuring ActiveRecord
docs](https://guides.rubyonrails.org/configuring.html#activerecord-connectionadapters-postgresqladapter-datetime-type).

Then, if you have a new migration like the following:

```ruby
class AddEventsTable < ActiveRecord::Migration[7.2]
  def change
    create_table :events do |t|
      t.string :title
      t.text :description
      t.datetime :start_time
      t.datetime :end_time

      t.timestamps
    end
  end
end
```

you can expect to have four `timestamptz` columns, namely `start_time`,
`end_time`, `created_at`, and `updated_at`.

Here is the [Rails PR](https://github.com/rails/rails/pull/41084) that adds
this config option.
unix/get-the-sha256-hash-for-a-file.md (new file, 34 lines)
# Get The SHA256 Hash For A File

Unix systems come with a `sha256sum` utility that we can use to compute the
SHA256 hash of a file. This means the contents of the file are reduced to a
256-bit digest.

Here I use it on a SQL migration file that I've generated.

```bash
$ sha256sum migrations/0001_large_doctor_spectrum.sql
b75e61451e2ce37d831608b1bc9231bf3af09e0ab54bf169be117de9d4ff6805  migrations/0001_large_doctor_spectrum.sql
```

Each file passed to this utility gets output on a separate line, which is why
we see the filename next to the hash. Since I am only running it on a single
file and I may want to pipe the output to some other program, I can clip off
just the part I need.

```bash
$ sha256sum migrations/0001_large_doctor_spectrum.sql | cut -d ' ' -f 1
b75e61451e2ce37d831608b1bc9231bf3af09e0ab54bf169be117de9d4ff6805
```
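As a quick sanity check, `sha256sum` also reads from stdin (the filename column shows `-` in that case), which makes it easy to hash an arbitrary string:

```shell
printf 'hello' | sha256sum | cut -d ' ' -f 1
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```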
We can also produce these digests with `openssl`:

```bash
$ openssl dgst -sha256 migrations/0001_large_doctor_spectrum.sql
SHA2-256(migrations/0001_large_doctor_spectrum.sql)= b75e61451e2ce37d831608b1bc9231bf3af09e0ab54bf169be117de9d4ff6805

$ openssl dgst -sha256 migrations/0001_large_doctor_spectrum.sql | cut -d ' ' -f 2
b75e61451e2ce37d831608b1bc9231bf3af09e0ab54bf169be117de9d4ff6805
```

See `sha256sum --help` or `openssl dgst --help` for more details.