
Sometimes you’ll want to initiate a Selenium or Playwright session with an existing set of cookies. My approach to this is to retrieve those cookies using a browser and save them to a file so that I can easily load them into my script.
Get the Cookies
You can find the current cookies in both Chrome and Firefox using Developer Tools. As you might expect, the location is different for each browser.
For Chrome you should select the Application tab and then expand the Cookies item in the menu. You can then filter the cookies using the list of domains. In the screenshot below I’m only looking at the cookies that apply to the root domain at www.google.com. There are additional cookies for another sub-domain, but those are not currently selected.

Once you’ve found the cookies you can copy them onto the clipboard. Just click on any one of the listed cookies, select them all with Ctrl-A and copy with Ctrl-C.
On Firefox you select the Storage tab and then again expand the Cookies item in the menu.

Unfortunately there doesn’t seem to be a comparable way to select and copy all of the cookies in Firefox. However, there is a SQLite database, cookies.sqlite, which holds all of the cookie data. On the one hand this is not as convenient as simply copying all of the cookies directly from Developer Tools, but on the other hand it presents an opportunity to automate the process.
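As a sketch of that automation, the snippet below pulls cookies straight out of cookies.sqlite with Python's standard sqlite3 module. It assumes the moz_cookies table and column names used by recent Firefox versions (the schema does change occasionally), so you may need to adjust the query; the helper name and profile paths in the comments are my own.

```python
import json
import sqlite3

def read_firefox_cookies(db_path, domain=None):
    """Read cookies from a Firefox cookies.sqlite database.

    The moz_cookies schema varies between Firefox versions, so adjust
    the column names below if the query fails on your profile.
    """
    query = (
        "SELECT name, value, host, path, expiry, isSecure "
        "FROM moz_cookies"
    )
    params = ()
    if domain is not None:
        # Match the domain and any of its sub-domains.
        query += " WHERE host LIKE ?"
        params = ("%" + domain,)
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(query, params).fetchall()
    finally:
        conn.close()
    return [
        {
            "name": name,
            "value": value,
            "domain": host,
            "path": path,
            "expires": expiry,
            "secure": bool(secure),
        }
        for name, value, host, path, expiry, secure in rows
    ]

# Example usage (the profile lives under ~/.mozilla/firefox/ on Linux or
# %APPDATA%\Mozilla\Firefox\Profiles on Windows; copy cookies.sqlite out
# first, since Firefox keeps it locked while running):
#
# cookies = read_firefox_cookies("cookies.sqlite", domain="google.com")
# with open("cookies.json", "w", encoding="utf-8") as f:
#     json.dump(cookies, f, indent=2)
```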
In the interests of getting those cookies quickly and easily we’ll focus our attention on Chrome. After you’ve copied the cookies onto the clipboard, paste them into an editor and save as a text file. I’ve saved mine to chrome-cookies.tsv, where the .tsv extension indicates that the data are in tab-separated format. Next we’ll wrestle them into JSON.
📢 You might find that the columns in your table are different to those in mine. There’s some variation with different versions of Chrome. I’m using Chromium 135.0.7049.114. You’ll need to adapt the Python script below accordingly.
Load and Export
The script below loads the .tsv file and then exports it as JSON with all of the fields required by Selenium and Playwright.
import pandas as pd
import json
from datetime import datetime

COLUMNS = [
    "name",
    "value",
    "domain",
    "path",
    "expires",
    "size",
    "http_only",
    "secure",
    "same_site",
    "partition",
    "cross_site",
    "priority",
]

df = pd.read_csv("chrome-cookies.tsv", sep="\t", names=COLUMNS, index_col=False)

# Fill in missing values.
df["value"] = df["value"].fillna("")

# Drop unused columns.
df.drop(
    [
        "size",
        "http_only",
        "same_site",
        "partition",
        "cross_site",
        "priority",
    ],
    axis=1,
    inplace=True,
)

# Convert Boolean column.
df["secure"] = df["secure"] == "✓"

def normalize_expires(t):
    if pd.isna(t) or t == "Session":
        return None
    try:
        return datetime.fromisoformat(t.replace("Z", "+00:00")).timestamp()
    except ValueError:
        try:
            return float(t)
        except ValueError:
            return None

df["expires"] = df["expires"].apply(normalize_expires)

# Drop session cookies.
df = df[df["expires"].notna()]

with open("cookies.json", "w", encoding="utf-8") as f:
    json.dump(df.to_dict(orient="records"), f, indent=2)
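If you’re curious about what normalize_expires is doing with the timestamps, here’s the conversion in isolation. Python’s fromisoformat() only accepts a trailing “Z” from version 3.11 onwards, which is why the script swaps it for an explicit UTC offset first. The sample timestamp below is just for illustration.

```python
from datetime import datetime, timezone

# An expiry timestamp in the ISO-8601 form that Chrome's DevTools uses.
iso = "2025-10-30T03:12:49.078Z"

# fromisoformat() only understands the trailing "Z" from Python 3.11,
# so replace it with an explicit UTC offset first.
dt = datetime.fromisoformat(iso.replace("Z", "+00:00"))
ts = dt.timestamp()

# Converting back confirms the round trip.
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
```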
If you print the cookies data frame after dropping the unused columns (but before the expiry times are converted to numeric timestamps) you’ll see something like this:
name value domain path expires secure
0 AEC AVcja2du2QCPvscFfTNXM... .google.com / 2025-10-30T03:12:49.078Z True
1 NID 523=IeMFFbAxphsD2uPBX... .google.com / 2026-06-02T19:31:07.405Z True
2 SOCS CAISHAgBEhJnd3NfMjAyN... .google.com / 2026-06-02T03:12:53.405Z True
All of those fields will be exported to a JSON file, cookies.json. Only the name, value and domain fields are required; the others are just useful information. We can now put the resulting cookies.json to work.
Loading Cookies
Playwright
Here’s a script that opens the site with the saved cookies using Playwright.
import json
from playwright.sync_api import sync_playwright

with open("cookies.json", "r", encoding="utf-8") as f:
    cookies = json.load(f)

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    context = browser.new_context()

    # Visit domain to set origin.
    page = context.new_page()
    page.goto("https://www.google.com")

    # Add cookies.
    context.add_cookies(cookies)

    # Refresh or navigate again to apply cookies.
    page.goto("https://www.google.com")

    browser.close()
It seems extravagant to load the site twice (once before adding the cookies and again afterwards), but in my experience this is the most reliable approach. You could skip the first load and simply hope for the best. It might work. Equally, it might not. Best not to take the chance.
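One thing to watch: Playwright validates the cookie dictionaries passed to add_cookies(), and keys it doesn’t recognise can cause it to complain. The export script above only keeps fields Playwright understands, but if you assemble cookies from some other source then a defensive filter avoids surprises. A sketch (the helper name is my own; the allowed keys are taken from Playwright’s documentation):

```python
# Keys that Playwright's add_cookies() understands, per its documentation.
ALLOWED_KEYS = {"name", "value", "url", "domain", "path",
                "expires", "httpOnly", "secure", "sameSite"}

def filter_for_playwright(cookies):
    """Keep only the keys Playwright accepts, dropping any cookie
    that lacks the mandatory name and value fields."""
    cleaned = []
    for cookie in cookies:
        if "name" not in cookie or "value" not in cookie:
            continue
        cleaned.append({k: v for k, v in cookie.items() if k in ALLOWED_KEYS})
    return cleaned
```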
More information on adding cookies to Playwright can be found in the docs.
Selenium
This is the equivalent script using Selenium.
import json
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time

with open("cookies.json", "r", encoding="utf-8") as f:
    cookies = json.load(f)

options = Options()
driver = webdriver.Chrome(options=options)

# Visit domain to set origin.
driver.get("https://www.google.com")
time.sleep(2)

for cookie in cookies:
    if "expires" in cookie:
        cookie["expiry"] = int(cookie["expires"])
        del cookie["expires"]
    if cookie.get("domain", "").startswith("."):
        cookie["domain"] = cookie["domain"][1:]
    driver.add_cookie(cookie)

# Reload to apply cookies.
driver.get("https://www.google.com")
💡 Selenium insists on expiry rather than expires.
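The per-cookie adjustments in the loop above are worth isolating into a small helper so they can be tested without launching a browser. A sketch (the function name is my own):

```python
def to_selenium_cookie(cookie):
    """Reshape a cookie dict from our JSON export for Selenium's add_cookie()."""
    cookie = dict(cookie)  # work on a copy rather than mutating the caller's dict
    # Selenium wants an integer "expiry" rather than a float "expires".
    if "expires" in cookie:
        cookie["expiry"] = int(cookie.pop("expires"))
    # Selenium rejects the leading dot that Chrome uses for domain-wide cookies.
    if cookie.get("domain", "").startswith("."):
        cookie["domain"] = cookie["domain"][1:]
    return cookie

# For example:
# to_selenium_cookie({"name": "NID", "value": "abc",
#                     "domain": ".google.com", "expires": 1782848467.405})
```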
Details of working with cookies in Selenium can be found in the docs.