| Topic | Replies | Views | Activity |
| --- | --- | --- | --- |
| About Technical Help | 0 | 2882 | October 3, 2017 |
| Is it possible to schedule uptime? | 0 | 17 | January 6, 2026 |
| Rocky or RHEL for On-demand Instance | 0 | 23 | December 14, 2025 |
| Possible to upgrade CUDA in the Lambda Stack? | 2 | 74 | December 2, 2025 |
| Customer from India | 2 | 241 | November 27, 2025 |
| With all due respect: Lambda - CHECK YOUR EMAILS | 1 | 180 | November 18, 2025 |
| Initial setup advice | 0 | 55 | November 4, 2025 |
| Rent a windows machine | 1 | 73 | October 14, 2025 |
| Unable ssh via laptop but can ssh via other server and connect to the Jupyter via browser | 1 | 81 | October 14, 2025 |
| Nvidia Profiling support | 1 | 125 | October 1, 2025 |
| Hermes3-405B API not responding | 1 | 96 | October 1, 2025 |
| GH200 PyTorch installation failure | 1 | 192 | September 30, 2025 |
| Unable to create new instance | 1 | 77 | September 22, 2025 |
| Booting takes long time and then "alert" status for gpu_1x_h100_pcie | 4 | 2619 | September 15, 2025 |
| PS only reaches 28v when ramp is applied | 0 | 39 | September 9, 2025 |
| Using docker context with Lambda VM | 1 | 87 | September 5, 2025 |
| Cheapest way to upload data into the filesystem with slow upload bandwidth | 3 | 279 | August 26, 2025 |
| Is there a list of instance types by region somewhere? | 5 | 135 | August 26, 2025 |
| No display on Lambda Vector Pro | 2 | 101 | August 20, 2025 |
| 2 identical Lambda Quads; one can't connect to display; one can | 1 | 76 | August 20, 2025 |
| Lambda Vector One Ram Upgrade | 1 | 73 | August 20, 2025 |
| Unable to login with google | 1 | 76 | August 20, 2025 |
| Is it possible to persist the workspace configuration? | 2 | 131 | August 13, 2025 |
| Adding a host based adapter card to a scalar server | 0 | 40 | August 1, 2025 |
| Training Microsoft's MatterGen model doesn't use GPUs | 2 | 91 | July 10, 2025 |
| Tool calling in Lambda Inference API | 6 | 319 | June 24, 2025 |
| No direct startup script support? | 2 | 165 | June 15, 2025 |
| Inference API Timeout | 2 | 126 | June 6, 2025 |
| Inference Streaming Changed | 0 | 62 | May 13, 2025 |
| Downloading speed is slow on H100, only ~70MB/s | 4 | 1696 | June 8, 2023 |