Mirror of https://github.com/kemko/icecast-ripper.git (synced 2026-01-01 15:55:42 +03:00)
Integrate MP3 duration extraction and silence handling (#6)
* feat: integrate MP3 duration extraction and silence handling
  - Added a new dependency on github.com/tcolgate/mp3 for MP3 frame decoding.
  - Implemented actual MP3 duration retrieval in the scanRecordings function, falling back to an estimation if retrieval fails.
  - Introduced a silent frame asset for generating silence in audio streams.
  - Created a silence reader to provide a continuous stream of silent frames.
  - Added necessary documentation and licensing for the new MP3 package.
  - Updated .gitignore to exclude MP3 files except for the internal data directory.
* feat: update README and improve application configuration and logging
* refactor: remove unused GenerateFileHash function and improve resource cleanup in MP3 and recorder modules
123 README.md
@@ -1,102 +1,89 @@
# Icecast Ripper

A Go application that monitors and records Icecast audio streams. It detects when streams go live, automatically records the audio content, and generates an RSS feed of recordings.
A lightweight Go application that automatically monitors Icecast audio streams, records them when they go live, and serves recordings via an RSS feed for podcast clients.

## Features

- **Automatic Stream Monitoring**: Periodically checks if a stream is active
- **Intelligent Recording**: Records audio streams when they become active
- **RSS Feed Generation**: Provides an RSS feed of recorded streams
- **Web Interface**: Simple HTTP server for accessing recordings and RSS feed
- **Docker Support**: Run easily in containers with Docker and Docker Compose
- **Configurable**: Set recording paths, check intervals, and more via environment variables
- **Smart Stream Detection**: Monitors Icecast streams and detects when they go live
- **Automatic Recording**: Records live streams to MP3 files with timestamps
- **Podcast-Ready RSS Feed**: Generates an RSS feed compatible with podcast clients
- **Web Server**: Built-in HTTP server for accessing recordings and RSS feed
- **Containerized**: Ready to run with Docker and Docker Compose
- **Configurable**: Easy configuration via environment variables

## Installation
## Quick Start

### Binary Installation

1. Download the latest release from the [GitHub releases page](https://github.com/kemko/icecast-ripper/releases)
2. Extract the binary to a location in your PATH
3. Run the binary with the required configuration (see Configuration section)

### Docker Installation

Pull the Docker image:
### Using Docker

```bash
docker pull ghcr.io/kemko/icecast-ripper:master
docker run -d \
  --name icecast-ripper \
  -p 8080:8080 \
  -e STREAM_URL=http://example.com:8000/stream \
  -v ./recordings:/recordings \
  ghcr.io/kemko/icecast-ripper:latest
```

Or use Docker Compose (see the Docker Compose section below).
### Using Docker Compose

### Building From Source
```yaml
services:
  icecast-ripper:
    image: ghcr.io/kemko/icecast-ripper:latest
    ports:
      - "8080:8080"
    environment:
      - STREAM_URL=http://example.com:8000/stream
      - PUBLIC_URL=https://your-domain.com # For RSS feed links
    volumes:
      - ./recordings:/recordings
```

Requires Go 1.24 or higher.
### Running the Binary

Download the latest release and run:

```bash
git clone https://github.com/kemko/icecast-ripper.git
cd icecast-ripper
go build -o icecast-ripper ./cmd/icecast-ripper/main.go
export STREAM_URL=http://example.com:8000/stream
./icecast-ripper
```

## Configuration

Icecast Ripper is configured through environment variables:
Configure Icecast Ripper with these environment variables:

| Environment Variable | Description | Default | Required |
|----------------------|-------------|---------|----------|
| Variable | Description | Default | Required |
|----------|-------------|---------|----------|
| `STREAM_URL` | URL of the Icecast stream to monitor | - | Yes |
| `CHECK_INTERVAL` | Interval between stream checks (e.g., 1m, 30s) | 1m | No |
| `RECORDINGS_PATH` | Path where recordings are stored | ./recordings | No |
| `TEMP_PATH` | Path for temporary files | ./temp | No |
| `BIND_ADDRESS` | Address and port for the HTTP server | :8080 | No |
| `PUBLIC_URL` | Public URL for the RSS feed | <http://localhost:8080> | No |
| `CHECK_INTERVAL` | How often to check if the stream is live | 1m | No |
| `RECORDINGS_PATH` | Where to store recordings | ./recordings | No |
| `TEMP_PATH` | Where to store temporary files | /tmp | No |
| `BIND_ADDRESS` | HTTP server address:port | :8080 | No |
| `PUBLIC_URL` | Public URL for RSS feed links | <http://localhost:8080> | No |
| `LOG_LEVEL` | Logging level (debug, info, warn, error) | info | No |

## Docker Compose
## Endpoints

Create a `docker-compose.yml` file:
- `GET /rss` - RSS feed of recordings (for podcast apps)
- `GET /recordings/` - Direct access to stored recordings

```yaml
---
services:
  icecast-ripper:
    image: ghcr.io/kemko/icecast-ripper:master
    ports:
      - "8080:8080"
    environment:
      - STREAM_URL=http://example.com/stream
      - CHECK_INTERVAL=1m
      - RECORDINGS_PATH=/records
      - TEMP_PATH=/app/temp
      - SERVER_ADDRESS=:8080
      - RSS_FEED_URL=http://localhost:8080/rss
      - LOG_LEVEL=info
    volumes:
      - ./records:/records
      - ./temp:/app/temp
      - ./data:/app/data
```
## Building From Source

Run with:
Requires Go 1.22 or higher:

```bash
docker-compose up -d
git clone https://github.com/kemko/icecast-ripper.git
cd icecast-ripper
make build
```

## Usage
## How It Works

1. Start the application with the required configuration
2. The application will monitor the stream at the specified interval
3. When the stream becomes active, recording starts automatically
4. Access the RSS feed at `http://localhost:8080/rss` (or the configured URL)
5. Access the recordings directly via the web interface

## API Endpoints

- `GET /` - Lists all recordings
- `GET /rss` - RSS feed of recordings
- `GET /recordings/{filename}` - Download a specific recording
1. The application checks if the specified Icecast stream is live
2. When the stream is detected as live, recording begins
3. Recording continues until the stream ends or is interrupted
4. Recordings are saved with timestamps in the configured directory
5. The RSS feed is automatically updated with new recordings

## License

@@ -20,63 +20,66 @@ import (
	"github.com/kemko/icecast-ripper/internal/streamchecker"
)

const version = "0.2.0"
const version = "0.3.0"

func main() {
	if err := run(); err != nil {
		_, err := fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		if err != nil {
			return
		}
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}
}

func run() error {
	// Load configuration
	// Load and validate configuration
	cfg, err := config.LoadConfig()
	if err != nil {
		return fmt.Errorf("error loading configuration: %w", err)
		return fmt.Errorf("configuration error: %w", err)
	}

	// Setup logger
	logger.Setup(cfg.LogLevel)
	// Setup logger with text format for better human readability
	logger.Setup(cfg.LogLevel, logger.Text)
	slog.Info("Starting icecast-ripper", "version", version)

	// Validate essential configuration
	if cfg.StreamURL == "" {
		return fmt.Errorf("configuration error: STREAM_URL must be set")
	}

	// Extract stream name for identification
	streamName := extractStreamName(cfg.StreamURL)
	slog.Info("Using stream name for identification", "name", streamName)
	slog.Info("Using stream identifier", "name", streamName)

	// Create main context that cancels on shutdown signal
	// Create shutdown context
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	// Create common User-Agent for all HTTP requests
	userAgent := fmt.Sprintf("icecast-ripper/%s", version)

	// Initialize components
	streamChecker := streamchecker.New(cfg.StreamURL)
	recorderInstance, err := recorder.New(cfg.TempPath, cfg.RecordingsPath, streamName)
	streamChecker := streamchecker.New(
		cfg.StreamURL,
		streamchecker.WithUserAgent(userAgent),
	)

	recorderInstance, err := recorder.New(
		cfg.TempPath,
		cfg.RecordingsPath,
		streamName,
		recorder.WithUserAgent(userAgent),
	)
	if err != nil {
		return fmt.Errorf("failed to initialize recorder: %w", err)
		return fmt.Errorf("recorder initialization failed: %w", err)
	}

	rssGenerator := rss.New(cfg, "Icecast Recordings", "Recordings from stream: "+cfg.StreamURL, streamName)
	feedTitle := "Icecast Recordings"
	feedDesc := "Recordings from stream: " + cfg.StreamURL
	rssGenerator := rss.New(cfg, feedTitle, feedDesc, streamName)

	schedulerInstance := scheduler.New(cfg.CheckInterval, streamChecker, recorderInstance)
	httpServer := server.New(cfg, rssGenerator)

	// Start services
	slog.Info("Starting services...")

	// Start the scheduler which will check for streams and record them
	schedulerInstance.Start(ctx)

	// Start the HTTP server for RSS feed
	if err := httpServer.Start(); err != nil {
		stop() // Cancel context before returning
		return fmt.Errorf("failed to start HTTP server: %w", err)
		return fmt.Errorf("HTTP server failed to start: %w", err)
	}

	slog.Info("Application started successfully. Press Ctrl+C to shut down.")
@@ -85,17 +88,14 @@ func run() error {
	<-ctx.Done()
	slog.Info("Shutting down application...")

	// Graceful shutdown
	// Graceful shutdown with timeout
	shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer shutdownCancel()

	// First stop the scheduler to prevent new recordings
	// Stop components in reverse order of dependency
	schedulerInstance.Stop()

	// Then stop any ongoing recording
	recorderInstance.StopRecording()

	// Finally, stop the HTTP server
	if err := httpServer.Stop(shutdownCtx); err != nil {
		slog.Warn("HTTP server shutdown error", "error", err)
	}
@@ -104,7 +104,7 @@ func run() error {
	return nil
}

// extractStreamName extracts a meaningful identifier from the URL
// extractStreamName derives a meaningful identifier from the stream URL
func extractStreamName(streamURL string) string {
	parsedURL, err := url.Parse(streamURL)
	if err != nil {
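The body of `extractStreamName` is cut off by this hunk, so the following is a hypothetical sketch only (not the project's implementation) of how an identifier can be derived from a stream URL; `streamNameFromURL` and the example URL are illustrative names.

```go
// Hypothetical sketch: derive a stream identifier from its URL.
// The real extractStreamName body is not shown in this hunk.
package main

import (
	"fmt"
	"net/url"
)

func streamNameFromURL(streamURL string) string {
	parsed, err := url.Parse(streamURL)
	if err != nil {
		return "stream" // generic fallback for unparseable input
	}
	name := parsed.Hostname() // e.g. "stream.somesite.com", matching the filename pattern in rss.go
	if name == "" {
		name = "stream"
	}
	return name
}

func main() {
	fmt.Println(streamNameFromURL("http://stream.somesite.com:8000/live"))
}
```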
1 go.mod
@@ -17,6 +17,7 @@ require (
	github.com/spf13/cast v1.7.1 // indirect
	github.com/spf13/pflag v1.0.6 // indirect
	github.com/subosito/gotenv v1.6.0 // indirect
	github.com/tcolgate/mp3 v0.0.0-20170426193717-e79c5a46d300 // indirect
	go.uber.org/multierr v1.11.0 // indirect
	golang.org/x/sys v0.32.0 // indirect
	golang.org/x/text v0.24.0 // indirect
2 go.sum
@@ -36,6 +36,8 @@ github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOf
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tcolgate/mp3 v0.0.0-20170426193717-e79c5a46d300 h1:XQdibLKagjdevRB6vAjVY4qbSr8rQ610YzTkWcxzxSI=
github.com/tcolgate/mp3 v0.0.0-20170426193717-e79c5a46d300/go.mod h1:FNa/dfN95vAYCNFrIKRrlRo+MBLbwmR9Asa5f2ljmBI=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
@@ -1,20 +1,21 @@
package config

import (
	"fmt"
	"time"

	"github.com/spf13/viper"
)

// Config stores all configuration for the application
// Config stores application configuration loaded from environment variables
type Config struct {
	StreamURL      string        `mapstructure:"STREAM_URL"`
	CheckInterval  time.Duration `mapstructure:"CHECK_INTERVAL"`
	RecordingsPath string        `mapstructure:"RECORDINGS_PATH"`
	TempPath       string        `mapstructure:"TEMP_PATH"`
	BindAddress    string        `mapstructure:"BIND_ADDRESS"`
	PublicUrl      string        `mapstructure:"PUBLIC_URL"`
	LogLevel       string        `mapstructure:"LOG_LEVEL"`
	StreamURL      string        `mapstructure:"STREAM_URL"`      // URL of the Icecast stream to record
	CheckInterval  time.Duration `mapstructure:"CHECK_INTERVAL"`  // How often to check if the stream is live
	RecordingsPath string        `mapstructure:"RECORDINGS_PATH"` // Where to store recordings
	TempPath       string        `mapstructure:"TEMP_PATH"`       // Where to store temporary files during recording
	BindAddress    string        `mapstructure:"BIND_ADDRESS"`    // HTTP server address:port
	PublicURL      string        `mapstructure:"PUBLIC_URL"`      // Public-facing URL for RSS feed links
	LogLevel       string        `mapstructure:"LOG_LEVEL"`       // Logging level (debug, info, warn, error)
}

// LoadConfig reads configuration from environment variables
@@ -39,7 +40,12 @@ func LoadConfig() (*Config, error) {

	var config Config
	if err := v.Unmarshal(&config); err != nil {
		return nil, err
		return nil, fmt.Errorf("failed to parse configuration: %w", err)
	}

	// Validate required fields
	if config.StreamURL == "" {
		return nil, fmt.Errorf("STREAM_URL is required")
	}

	return &config, nil
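A minimal sketch of how the validated loader is consumed from inside the module (assuming, as the README documents, that these variables are read from the environment; the example values are illustrative):

```go
// Sketch: load configuration from environment variables and fail fast if STREAM_URL is missing.
package main

import (
	"fmt"
	"os"

	"github.com/kemko/icecast-ripper/internal/config" // internal package: only usable within this module
)

func main() {
	os.Setenv("STREAM_URL", "http://example.com:8000/stream")
	os.Setenv("CHECK_INTERVAL", "30s")

	cfg, err := config.LoadConfig() // returns an error if STREAM_URL is unset
	if err != nil {
		fmt.Fprintln(os.Stderr, "configuration error:", err)
		os.Exit(1)
	}
	fmt.Println(cfg.StreamURL, cfg.CheckInterval, cfg.PublicURL)
}
```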
@@ -4,8 +4,6 @@ import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"log/slog"
	"os"
	"path/filepath"
	"time"
)
@@ -26,20 +24,5 @@ func GenerateGUID(streamName string, recordedAt time.Time, filePath string) stri
	hasher.Write([]byte(input))
	guid := hex.EncodeToString(hasher.Sum(nil))

	slog.Debug("Generated GUID", "input", input, "guid", guid)
	return guid
}

// GenerateFileHash is maintained for backwards compatibility
// Uses file metadata instead of content
func GenerateFileHash(filePath string) (string, error) {
	fileInfo, err := os.Stat(filePath)
	if err != nil {
		return "", fmt.Errorf("failed to stat file %s: %w", filePath, err)
	}

	streamName := filepath.Base(filepath.Dir(filePath))
	recordedAt := fileInfo.ModTime()

	return GenerateGUID(streamName, recordedAt, filePath), nil
}
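The hunk above shows only the tail of GenerateGUID, so the exact input string it hashes is not visible here. Purely as an illustration of the idea (a stable hex GUID derived from stream name, recording time, and path), a sketch might look like this; the concatenation format is an assumption, not the project's:

```go
// Illustration only: a stable identifier from stream name, recording time and path.
// The exact concatenation used by hash.GenerateGUID is not visible in this hunk.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

func guid(streamName string, recordedAt time.Time, filePath string) string {
	h := sha256.New()
	// Any deterministic combination of the three inputs yields a stable GUID.
	h.Write([]byte(streamName + "|" + recordedAt.UTC().Format(time.RFC3339) + "|" + filePath))
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	fmt.Println(guid("stream.somesite.com", time.Date(2024, 9, 7, 19, 56, 22, 0, time.UTC), "/recordings/x.mp3"))
}
```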
@@ -6,18 +6,43 @@ import (
	"strings"
)

// Setup initializes the structured logger with the specified log level
func Setup(logLevel string) {
// Format represents the output format for logs
type Format string

const (
	// JSON outputs logs in JSON format for machine readability
	JSON Format = "json"
	// Text outputs logs in a human-readable format
	Text Format = "text"
)

// Setup initializes the structured logger with the specified log level and format
func Setup(logLevel string, format ...Format) {
	level := parseLogLevel(logLevel)

	handler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
	// Default to JSON format if not specified
	logFormat := JSON
	if len(format) > 0 {
		logFormat = format[0]
	}

	var handler slog.Handler

	switch logFormat {
	case Text:
		handler = slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
			Level: level,
		})
	default: // JSON
		handler = slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
			Level: level,
		})
	}

	logger := slog.New(handler)
	slog.SetDefault(logger)

	slog.Info("Logger initialized", "level", level.String())
	slog.Debug("Logger initialized", "level", level.String(), "format", string(logFormat))
}

// parseLogLevel converts a string log level to slog.Level
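Because the new format parameter is variadic, existing one-argument call sites keep compiling; a short sketch of both call styles (runs inside the module, since the logger package is internal):

```go
// Sketch: the variadic format argument keeps the old one-argument call valid.
package main

import (
	"log/slog"

	"github.com/kemko/icecast-ripper/internal/logger" // internal package: only usable within this module
)

func main() {
	// Old call sites still compile: logger.Setup("info") defaults to JSON output.
	logger.Setup("debug", logger.Text) // human-readable text output, as main.go now uses

	slog.Info("logger ready", "component", "example")
}
```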
55 internal/mp3util/mp3util.go Normal file
@@ -0,0 +1,55 @@
// Package mp3util provides utilities for working with MP3 files
package mp3util

import (
	"errors"
	"fmt"
	"io"
	"os"
	"time"

	"github.com/tcolgate/mp3"
)

// GetDuration returns the actual duration of an MP3 file by analyzing its frames
func GetDuration(filePath string) (time.Duration, error) {
	file, err := os.Open(filePath)
	if err != nil {
		return 0, fmt.Errorf("failed to open MP3 file: %w", err)
	}
	defer func(file *os.File) {
		err := file.Close()
		if err != nil {
			fmt.Printf("Failed to close file: %v", err)
		}
	}(file)

	decoder := mp3.NewDecoder(file)
	var frame mp3.Frame
	var skipped int
	var totalSamples int
	sampleRate := 0

	// Process the frames to calculate the total duration
	for {
		if err := decoder.Decode(&frame, &skipped); err != nil {
			if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) {
				break
			}
			return 0, fmt.Errorf("failed to decode MP3 frame: %w", err)
		}

		if sampleRate == 0 {
			sampleRate = frame.Samples()
		}

		totalSamples += frame.Samples()
	}

	if totalSamples == 0 || sampleRate == 0 {
		return 0, errors.New("could not determine MP3 duration")
	}

	durationSeconds := float64(totalSamples) / float64(sampleRate)
	return time.Duration(durationSeconds * float64(time.Second)), nil
}
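As a point of reference for how the decoder API fits together, here is a minimal sketch (not part of this commit) that sums per-frame durations instead of counting samples; `fileDuration` and the example path are hypothetical. Note that `Frame.Samples()` is the per-frame sample count, while the sample rate itself comes from `Frame.Header().SampleRate()`.

```go
// Sketch only: sums per-frame durations using the same tcolgate/mp3 decoder.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"
	"time"

	"github.com/tcolgate/mp3"
)

func fileDuration(path string) (time.Duration, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	d := mp3.NewDecoder(f)
	var (
		frame   mp3.Frame
		skipped int
		total   time.Duration
	)
	for {
		if err := d.Decode(&frame, &skipped); err != nil {
			if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) {
				break
			}
			return 0, err
		}
		total += frame.Duration() // per-frame duration = samples / sample rate
	}
	return total, nil
}

func main() {
	// Hypothetical path, for illustration only.
	dur, err := fileDuration("recordings/stream_20240907_195622.mp3")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("duration:", dur)
}
```

For scale: at 44.1 kHz a Layer III frame carries 1152 samples, roughly 26 ms, so an hour-long recording is on the order of 138,000 frames; walking every frame is still fast, but the cost grows with file size.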
@@ -23,22 +23,40 @@ type Recorder struct {
|
||||
isRecording bool
|
||||
cancelFunc context.CancelFunc
|
||||
streamName string
|
||||
userAgent string
|
||||
}
|
||||
|
||||
// Option represents a functional option for configuring the recorder
|
||||
type Option func(*Recorder)
|
||||
|
||||
// WithUserAgent sets a custom User-Agent string for HTTP requests
|
||||
func WithUserAgent(userAgent string) Option {
|
||||
return func(r *Recorder) {
|
||||
r.userAgent = userAgent
|
||||
}
|
||||
}
|
||||
|
||||
// New creates a recorder instance
|
||||
func New(tempPath, recordingsPath string, streamName string) (*Recorder, error) {
|
||||
func New(tempPath, recordingsPath string, streamName string, opts ...Option) (*Recorder, error) {
|
||||
for _, dir := range []string{tempPath, recordingsPath} {
|
||||
if err := os.MkdirAll(dir, 0755); err != nil {
|
||||
return nil, fmt.Errorf("failed to create directory %s: %w", dir, err)
|
||||
}
|
||||
}
|
||||
|
||||
return &Recorder{
|
||||
r := &Recorder{
|
||||
tempPath: tempPath,
|
||||
recordingsPath: recordingsPath,
|
||||
streamName: streamName,
|
||||
client: &http.Client{Timeout: 0}, // No timeout, rely on context cancellation
|
||||
}, nil
|
||||
client: &http.Client{Timeout: 0}, // No timeout for long-running downloads
|
||||
}
|
||||
|
||||
// Apply any provided options
|
||||
for _, opt := range opts {
|
||||
opt(r)
|
||||
}
|
||||
|
||||
return r, nil
|
||||
}
|
||||
|
||||
// IsRecording returns whether a recording is currently in progress
|
||||
@@ -90,19 +108,19 @@ func (r *Recorder) recordStream(ctx context.Context, streamURL string) {
|
||||
r.mu.Unlock()
|
||||
slog.Info("Recording process finished")
|
||||
|
||||
// Only clean up temp file if it was successfully moved to final location
|
||||
if tempFilePath != "" && moveSuccessful {
|
||||
if _, err := os.Stat(tempFilePath); err == nil {
|
||||
slog.Debug("Cleaning up temporary file", "path", tempFilePath)
|
||||
if err := os.Remove(tempFilePath); err != nil {
|
||||
slog.Error("Failed to remove temporary file", "error", err)
|
||||
if tempFilePath != "" && !moveSuccessful {
|
||||
slog.Warn("Temporary file preserved for inspection", "path", tempFilePath)
|
||||
return
|
||||
}
|
||||
|
||||
if tempFilePath != "" {
|
||||
if err := cleanupTempFile(tempFilePath); err != nil {
|
||||
slog.Error("Failed to remove temporary file", "path", tempFilePath, "error", err)
|
||||
}
|
||||
} else if tempFilePath != "" && !moveSuccessful {
|
||||
slog.Warn("Temporary file preserved for manual inspection", "path", tempFilePath)
|
||||
}
|
||||
}()
|
||||
|
||||
// Create temp file for recording
|
||||
tempFile, err := os.CreateTemp(r.tempPath, "recording-*.tmp")
|
||||
if err != nil {
|
||||
slog.Error("Failed to create temporary file", "error", err)
|
||||
@@ -120,7 +138,7 @@ func (r *Recorder) recordStream(ctx context.Context, streamURL string) {
|
||||
}
|
||||
}
|
||||
|
||||
// Handle context cancellation or download errors
|
||||
// Handle errors and early termination
|
||||
if err != nil {
|
||||
if errors.Is(err, context.Canceled) {
|
||||
slog.Info("Recording stopped via cancellation")
|
||||
@@ -137,30 +155,96 @@ func (r *Recorder) recordStream(ctx context.Context, streamURL string) {
|
||||
}
|
||||
|
||||
// Process successful recording
|
||||
endTime := time.Now()
|
||||
duration := endTime.Sub(startTime)
|
||||
finalPath := r.generateFinalPath(startTime)
|
||||
moveSuccessful = r.moveToFinalLocation(tempFilePath, finalPath)
|
||||
|
||||
if moveSuccessful {
|
||||
duration := time.Since(startTime)
|
||||
slog.Info("Recording saved", "path", finalPath, "size", bytesWritten, "duration", duration)
|
||||
}
|
||||
}
|
||||
|
||||
func (r *Recorder) generateFinalPath(startTime time.Time) string {
|
||||
finalFilename := fmt.Sprintf("%s_%s.mp3", r.streamName, startTime.Format("20060102_150405"))
|
||||
finalFilename = sanitizeFilename(finalFilename)
|
||||
finalPath := filepath.Join(r.recordingsPath, finalFilename)
|
||||
return filepath.Join(r.recordingsPath, finalFilename)
|
||||
}
|
||||
|
||||
func (r *Recorder) moveToFinalLocation(tempPath, finalPath string) bool {
|
||||
// Try rename first (fastest)
|
||||
if err := os.Rename(tempFilePath, finalPath); err != nil {
|
||||
slog.Warn("Failed to move recording with rename, trying copy fallback", "error", err)
|
||||
if err := os.Rename(tempPath, finalPath); err == nil {
|
||||
return true
|
||||
}
|
||||
|
||||
// Fallback to manual copy
|
||||
if err := copyFile(tempFilePath, finalPath); err != nil {
|
||||
if err := copyFile(tempPath, finalPath); err != nil {
|
||||
slog.Error("Failed to move recording to final location", "error", err)
|
||||
return
|
||||
return false
|
||||
}
|
||||
|
||||
// Copy successful, mark for cleanup
|
||||
moveSuccessful = true
|
||||
slog.Info("Recording copied successfully using fallback method", "path", finalPath)
|
||||
} else {
|
||||
moveSuccessful = true
|
||||
slog.Info("Recording copied successfully using fallback method")
|
||||
return true
|
||||
}
|
||||
|
||||
func (r *Recorder) downloadStream(ctx context.Context, streamURL string, writer io.Writer) (int64, error) {
|
||||
req, err := http.NewRequestWithContext(ctx, http.MethodGet, streamURL, nil)
|
||||
if err != nil {
|
||||
return 0, fmt.Errorf("failed to create request: %w", err)
|
||||
}
|
||||
req.Header.Set("User-Agent", r.userAgent)
|
||||
|
||||
resp, err := r.client.Do(req)
|
||||
if err != nil {
|
||||
if errors.Is(err, context.Canceled) {
|
||||
return 0, err
|
||||
}
|
||||
return 0, fmt.Errorf("failed to connect to stream: %w", err)
|
||||
}
|
||||
defer func(Body io.ReadCloser) {
|
||||
err := Body.Close()
|
||||
if err != nil {
|
||||
slog.Error("Failed to close response body", "error", err)
|
||||
}
|
||||
}(resp.Body)
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return 0, fmt.Errorf("unexpected status code: %s", resp.Status)
|
||||
}
|
||||
|
||||
slog.Info("Recording saved", "path", finalPath, "size", bytesWritten, "duration", duration)
|
||||
slog.Debug("Connected to stream, downloading", "url", streamURL)
|
||||
bytesWritten, err := io.Copy(writer, resp.Body)
|
||||
|
||||
if err != nil {
|
||||
if errors.Is(err, context.Canceled) {
|
||||
slog.Info("Stream download cancelled")
|
||||
return bytesWritten, ctx.Err()
|
||||
}
|
||||
|
||||
// Handle common stream disconnections gracefully
|
||||
if isNormalDisconnect(err) {
|
||||
slog.Info("Stream disconnected normally", "bytesWritten", bytesWritten)
|
||||
return bytesWritten, nil
|
||||
}
|
||||
|
||||
return bytesWritten, fmt.Errorf("failed during stream copy: %w", err)
|
||||
}
|
||||
|
||||
slog.Info("Stream download finished normally", "bytesWritten", bytesWritten)
|
||||
return bytesWritten, nil
|
||||
}
|
||||
|
||||
func isNormalDisconnect(err error) bool {
|
||||
return errors.Is(err, io.ErrUnexpectedEOF) ||
|
||||
strings.Contains(err.Error(), "connection reset by peer") ||
|
||||
strings.Contains(err.Error(), "broken pipe")
|
||||
}
|
||||
|
||||
func cleanupTempFile(path string) error {
|
||||
if _, err := os.Stat(path); err == nil {
|
||||
slog.Debug("Cleaning up temporary file", "path", path)
|
||||
return os.Remove(path)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// copyFile copies a file from src to dst
|
||||
@@ -194,54 +278,6 @@ func copyFile(src, dst string) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *Recorder) downloadStream(ctx context.Context, streamURL string, writer io.Writer) (int64, error) {
|
||||
req, err := http.NewRequestWithContext(ctx, http.MethodGet, streamURL, nil)
|
||||
if err != nil {
|
||||
return 0, fmt.Errorf("failed to create request: %w", err)
|
||||
}
|
||||
req.Header.Set("User-Agent", "icecast-ripper/1.0")
|
||||
|
||||
resp, err := r.client.Do(req)
|
||||
if err != nil {
|
||||
if errors.Is(err, context.Canceled) {
|
||||
return 0, err
|
||||
}
|
||||
return 0, fmt.Errorf("failed to connect to stream: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
if err := resp.Body.Close(); err != nil {
|
||||
slog.Error("Failed to close response body", "error", err)
|
||||
}
|
||||
}()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return 0, fmt.Errorf("unexpected status code: %s", resp.Status)
|
||||
}
|
||||
|
||||
slog.Debug("Connected to stream, downloading", "url", streamURL)
|
||||
bytesWritten, err := io.Copy(writer, resp.Body)
|
||||
|
||||
if err != nil {
|
||||
if errors.Is(err, context.Canceled) {
|
||||
slog.Info("Stream download cancelled")
|
||||
return bytesWritten, ctx.Err()
|
||||
}
|
||||
|
||||
// Handle common stream disconnections gracefully
|
||||
if errors.Is(err, io.ErrUnexpectedEOF) ||
|
||||
strings.Contains(err.Error(), "connection reset by peer") ||
|
||||
strings.Contains(err.Error(), "broken pipe") {
|
||||
slog.Info("Stream disconnected normally", "bytesWritten", bytesWritten)
|
||||
return bytesWritten, nil
|
||||
}
|
||||
|
||||
return bytesWritten, fmt.Errorf("failed during stream copy: %w", err)
|
||||
}
|
||||
|
||||
slog.Info("Stream download finished normally", "bytesWritten", bytesWritten)
|
||||
return bytesWritten, nil
|
||||
}
|
||||
|
||||
func sanitizeFilename(filename string) string {
|
||||
replacer := strings.NewReplacer(
|
||||
" ", "_",
|
||||
|
||||
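The refactored moveToFinalLocation keeps the rename-first, copy-second strategy from the original code. A self-contained sketch of that pattern follows; `moveFile` and the example paths are illustrative, not the project's helpers:

```go
// Sketch of the rename-then-copy fallback used when the temp dir and the
// recordings dir may sit on different filesystems (os.Rename cannot cross them).
package main

import (
	"fmt"
	"io"
	"os"
)

func moveFile(src, dst string) error {
	if err := os.Rename(src, dst); err == nil {
		return nil // fast path: same filesystem
	}
	// Fallback: copy the contents, then remove the source.
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	if _, err := io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	if err := out.Close(); err != nil {
		return err
	}
	return os.Remove(src)
}

func main() {
	if err := moveFile("temp/recording-123.tmp", "recordings/stream_20240907_195622.mp3"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```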
@@ -13,6 +13,7 @@ import (
|
||||
"github.com/gorilla/feeds"
|
||||
"github.com/kemko/icecast-ripper/internal/config"
|
||||
"github.com/kemko/icecast-ripper/internal/hash"
|
||||
"github.com/kemko/icecast-ripper/internal/mp3util"
|
||||
)
|
||||
|
||||
// RecordingInfo contains metadata about a recording
|
||||
@@ -24,9 +25,9 @@ type RecordingInfo struct {
|
||||
RecordedAt time.Time
|
||||
}
|
||||
|
||||
// Generator creates RSS feeds
|
||||
// Generator creates RSS feeds for recorded streams
|
||||
type Generator struct {
|
||||
baseUrl string
|
||||
baseURL string
|
||||
recordingsPath string
|
||||
feedTitle string
|
||||
feedDesc string
|
||||
@@ -35,15 +36,15 @@ type Generator struct {
|
||||
|
||||
// New creates a new RSS Generator instance
|
||||
func New(cfg *config.Config, title, description, streamName string) *Generator {
|
||||
baseUrl := cfg.PublicUrl
|
||||
baseURL := cfg.PublicURL
|
||||
|
||||
// Ensure base URL ends with a slash
|
||||
if !strings.HasSuffix(baseUrl, "/") {
|
||||
baseUrl += "/"
|
||||
if !strings.HasSuffix(baseURL, "/") {
|
||||
baseURL += "/"
|
||||
}
|
||||
|
||||
return &Generator{
|
||||
baseUrl: cfg.PublicUrl,
|
||||
baseURL: baseURL,
|
||||
recordingsPath: cfg.RecordingsPath,
|
||||
feedTitle: title,
|
||||
feedDesc: description,
|
||||
@@ -54,7 +55,7 @@ func New(cfg *config.Config, title, description, streamName string) *Generator {
|
||||
// Pattern to extract timestamp from recording filename (stream.somesite.com_20240907_195622.mp3)
|
||||
var recordingPattern = regexp.MustCompile(`([^_]+)_(\d{8}_\d{6})\.mp3$`)
|
||||
|
||||
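As a quick illustration of the pattern above (using the example filename from the comment), the two capture groups split into stream name and timestamp, which is why the scan checks `len(matches) < 3` and parses `matches[2]`:

```go
// Sketch: how the recordingPattern groups map onto a recording filename.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var recordingPattern = regexp.MustCompile(`([^_]+)_(\d{8}_\d{6})\.mp3$`)

func main() {
	m := recordingPattern.FindStringSubmatch("stream.somesite.com_20240907_195622.mp3")
	if len(m) < 3 {
		return // not a recording produced by this tool
	}
	ts, err := time.Parse("20060102_150405", m[2]) // m[1] is the stream name
	if err != nil {
		return
	}
	fmt.Println(m[1], ts.Format(time.RFC3339)) // stream.somesite.com 2024-09-07T19:56:22Z
}
```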
// GenerateFeed produces the RSS feed XML as a byte slice
|
||||
// GenerateFeed produces the RSS feed XML
|
||||
func (g *Generator) GenerateFeed(maxItems int) ([]byte, error) {
|
||||
recordings, err := g.scanRecordings(maxItems)
|
||||
if err != nil {
|
||||
@@ -63,15 +64,27 @@ func (g *Generator) GenerateFeed(maxItems int) ([]byte, error) {
|
||||
|
||||
feed := &feeds.Feed{
|
||||
Title: g.feedTitle,
|
||||
Link: &feeds.Link{Href: g.baseUrl},
|
||||
Link: &feeds.Link{Href: g.baseURL},
|
||||
Description: g.feedDesc,
|
||||
Created: time.Now(),
|
||||
}
|
||||
|
||||
feed.Items = make([]*feeds.Item, 0, len(recordings))
|
||||
feed.Items = g.createFeedItems(recordings)
|
||||
|
||||
baseURL := g.baseUrl
|
||||
baseURL = strings.TrimSuffix(baseURL, "/")
|
||||
rssFeed, err := feed.ToRss()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to generate RSS feed: %w", err)
|
||||
}
|
||||
|
||||
slog.Debug("RSS feed generated", "itemCount", len(feed.Items))
|
||||
return []byte(rssFeed), nil
|
||||
}
|
||||
|
||||
// createFeedItems converts recording info to RSS feed items
|
||||
func (g *Generator) createFeedItems(recordings []RecordingInfo) []*feeds.Item {
|
||||
items := make([]*feeds.Item, 0, len(recordings))
|
||||
|
||||
baseURL := strings.TrimSuffix(g.baseURL, "/")
|
||||
|
||||
for _, rec := range recordings {
|
||||
fileURL := fmt.Sprintf("%s/recordings/%s", baseURL, rec.Filename)
|
||||
@@ -89,46 +102,29 @@ func (g *Generator) GenerateFeed(maxItems int) ([]byte, error) {
|
||||
Type: "audio/mpeg",
|
||||
},
|
||||
}
|
||||
feed.Items = append(feed.Items, item)
|
||||
items = append(items, item)
|
||||
}
|
||||
|
||||
rssFeed, err := feed.ToRss()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to generate RSS feed: %w", err)
|
||||
}
|
||||
|
||||
slog.Debug("RSS feed generated", "itemCount", len(feed.Items))
|
||||
return []byte(rssFeed), nil
|
||||
return items
|
||||
}
|
||||
|
||||
// scanRecordings scans the recordings directory and returns metadata about the files
|
||||
// scanRecordings scans the recordings directory and returns metadata
|
||||
func (g *Generator) scanRecordings(maxItems int) ([]RecordingInfo, error) {
|
||||
var recordings []RecordingInfo
|
||||
|
||||
err := filepath.WalkDir(g.recordingsPath, func(path string, d fs.DirEntry, err error) error {
|
||||
if err != nil {
|
||||
if err != nil || d.IsDir() || !strings.HasSuffix(strings.ToLower(d.Name()), ".mp3") {
|
||||
return err
|
||||
}
|
||||
|
||||
// Skip directories
|
||||
if d.IsDir() {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Only process mp3 files
|
||||
if !strings.HasSuffix(strings.ToLower(d.Name()), ".mp3") {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Extract timestamp from filename
|
||||
matches := recordingPattern.FindStringSubmatch(d.Name())
|
||||
if len(matches) < 3 {
|
||||
// Skip files not matching our pattern
|
||||
slog.Debug("Skipping non-conforming filename", "filename", d.Name())
|
||||
return nil
|
||||
}
|
||||
|
||||
// Parse the timestamp (now in the 3rd capture group [2])
|
||||
// Parse the timestamp from the filename
|
||||
timestamp, err := time.Parse("20060102_150405", matches[2])
|
||||
if err != nil {
|
||||
slog.Warn("Failed to parse timestamp from filename", "filename", d.Name(), "error", err)
|
||||
@@ -141,9 +137,13 @@ func (g *Generator) scanRecordings(maxItems int) ([]RecordingInfo, error) {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Calculate an estimated duration based on file size
|
||||
// Assuming ~128kbps MP3 bitrate: 16KB per second
|
||||
estimatedDuration := time.Duration(info.Size()/16000) * time.Second
|
||||
// Get the actual duration from the MP3 file
|
||||
duration, err := mp3util.GetDuration(path)
|
||||
if err != nil {
|
||||
slog.Warn("Failed to get MP3 duration, estimating", "filename", d.Name(), "error", err)
|
||||
// Estimate: ~128kbps MP3 bitrate = 16KB per second
|
||||
duration = time.Duration(info.Size()/16000) * time.Second
|
||||
}
|
||||
|
||||
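Worked through, the fallback estimate assumes 128 kbps, i.e. 128,000 / 8 = 16,000 bytes per second: a 57,600,000-byte file (about 57.6 MB) maps to 57,600,000 / 16,000 = 3,600 s, roughly one hour.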
// Generate a stable hash for the recording
|
||||
filename := filepath.Base(path)
|
||||
@@ -153,7 +153,7 @@ func (g *Generator) scanRecordings(maxItems int) ([]RecordingInfo, error) {
|
||||
Filename: filename,
|
||||
Hash: fileHash,
|
||||
FileSize: info.Size(),
|
||||
Duration: estimatedDuration,
|
||||
Duration: duration,
|
||||
RecordedAt: timestamp,
|
||||
})
|
||||
|
||||
|
||||
@@ -10,6 +10,7 @@ import (
|
||||
"github.com/kemko/icecast-ripper/internal/streamchecker"
|
||||
)
|
||||
|
||||
// Scheduler periodically checks if a stream is live and starts recording
|
||||
type Scheduler struct {
|
||||
interval time.Duration
|
||||
checker *streamchecker.Checker
|
||||
@@ -20,6 +21,7 @@ type Scheduler struct {
|
||||
parentContext context.Context
|
||||
}
|
||||
|
||||
// New creates a scheduler instance
|
||||
func New(interval time.Duration, checker *streamchecker.Checker, recorder *recorder.Recorder) *Scheduler {
|
||||
return &Scheduler{
|
||||
interval: interval,
|
||||
@@ -29,6 +31,7 @@ func New(interval time.Duration, checker *streamchecker.Checker, recorder *recor
|
||||
}
|
||||
}
|
||||
|
||||
// Start begins the scheduling process
|
||||
func (s *Scheduler) Start(ctx context.Context) {
|
||||
slog.Info("Starting scheduler", "interval", s.interval.String())
|
||||
s.parentContext = ctx
|
||||
@@ -36,17 +39,19 @@ func (s *Scheduler) Start(ctx context.Context) {
|
||||
go s.run()
|
||||
}
|
||||
|
||||
// Stop gracefully shuts down the scheduler
|
||||
func (s *Scheduler) Stop() {
|
||||
s.stopOnce.Do(func() {
|
||||
slog.Info("Stopping scheduler...")
|
||||
close(s.stopChan)
|
||||
s.wg.Wait()
|
||||
slog.Info("Scheduler stopped.")
|
||||
slog.Info("Scheduler stopped")
|
||||
})
|
||||
}
|
||||
|
||||
func (s *Scheduler) run() {
|
||||
defer s.wg.Done()
|
||||
|
||||
ticker := time.NewTicker(s.interval)
|
||||
defer ticker.Stop()
|
||||
|
||||
@@ -58,10 +63,9 @@ func (s *Scheduler) run() {
|
||||
case <-ticker.C:
|
||||
s.checkAndRecord()
|
||||
case <-s.stopChan:
|
||||
slog.Info("Scheduler run loop exiting.")
|
||||
return
|
||||
case <-s.parentContext.Done():
|
||||
slog.Info("Parent context cancelled, stopping scheduler.")
|
||||
slog.Info("Parent context cancelled, stopping scheduler")
|
||||
return
|
||||
}
|
||||
}
|
||||
@@ -69,12 +73,11 @@ func (s *Scheduler) run() {
|
||||
|
||||
func (s *Scheduler) checkAndRecord() {
|
||||
if s.recorder.IsRecording() {
|
||||
slog.Debug("Recording in progress, skipping stream check.")
|
||||
slog.Debug("Recording in progress, skipping stream check")
|
||||
return
|
||||
}
|
||||
|
||||
slog.Debug("Checking stream status")
|
||||
isLive, err := s.checker.IsLive()
|
||||
isLive, err := s.checker.IsLiveWithContext(s.parentContext)
|
||||
if err != nil {
|
||||
slog.Warn("Error checking stream status", "error", err)
|
||||
return
|
||||
|
||||
@@ -43,7 +43,7 @@ func New(cfg *config.Config, rssGenerator *rss.Generator) *Server {
|
||||
fileServer := http.FileServer(http.Dir(absRecordingsPath))
|
||||
mux.Handle("GET /recordings/", http.StripPrefix("/recordings/", fileServer))
|
||||
|
||||
// Configure server with sensible timeouts
|
||||
// Configure server with timeouts for robustness
|
||||
s.server = &http.Server{
|
||||
Addr: cfg.BindAddress,
|
||||
Handler: mux,
|
||||
@@ -68,7 +68,6 @@ func (s *Server) handleRSS(w http.ResponseWriter, _ *http.Request) {
|
||||
}
|
||||
|
||||
w.Header().Set("Content-Type", "application/rss+xml; charset=utf-8")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
if _, err = w.Write(feedBytes); err != nil {
|
||||
slog.Error("Failed to write RSS response", "error", err)
|
||||
}
|
||||
@@ -90,6 +89,5 @@ func (s *Server) Start() error {
|
||||
// Stop gracefully shuts down the HTTP server
|
||||
func (s *Server) Stop(ctx context.Context) error {
|
||||
slog.Info("Stopping HTTP server")
|
||||
|
||||
return s.server.Shutdown(ctx)
|
||||
}
|
||||
|
||||
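For completeness, a tiny client-side sketch of hitting the feed endpoint registered above (the localhost address assumes the default BIND_ADDRESS of :8080):

```go
// Sketch: fetch the generated RSS feed from the built-in HTTP server.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get("http://localhost:8080/rss")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	fmt.Println("Content-Type:", resp.Header.Get("Content-Type")) // application/rss+xml; charset=utf-8
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d bytes of RSS\n", len(body))
}
```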
@@ -1,6 +1,7 @@
|
||||
package streamchecker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"log/slog"
|
||||
"net/http"
|
||||
@@ -10,26 +11,51 @@ import (
|
||||
type Checker struct {
|
||||
streamURL string
|
||||
client *http.Client
|
||||
userAgent string
|
||||
}
|
||||
|
||||
func New(streamURL string) *Checker {
|
||||
return &Checker{
|
||||
// Option represents a functional option for configuring the Checker
|
||||
type Option func(*Checker)
|
||||
|
||||
// WithUserAgent sets a custom User-Agent header
|
||||
func WithUserAgent(userAgent string) Option {
|
||||
return func(c *Checker) {
|
||||
c.userAgent = userAgent
|
||||
}
|
||||
}
|
||||
|
||||
// New creates a new stream checker with sensible defaults
|
||||
func New(streamURL string, opts ...Option) *Checker {
|
||||
c := &Checker{
|
||||
streamURL: streamURL,
|
||||
client: &http.Client{
|
||||
Timeout: 10 * time.Second,
|
||||
},
|
||||
}
|
||||
|
||||
// Apply any provided options
|
||||
for _, opt := range opts {
|
||||
opt(c)
|
||||
}
|
||||
|
||||
return c
|
||||
}
|
||||
|
||||
// IsLive checks if the stream is currently broadcasting
|
||||
func (c *Checker) IsLive() (bool, error) {
|
||||
return c.IsLiveWithContext(context.Background())
|
||||
}
|
||||
|
||||
// IsLiveWithContext checks if the stream is live using the provided context
|
||||
func (c *Checker) IsLiveWithContext(ctx context.Context) (bool, error) {
|
||||
slog.Debug("Checking stream status", "url", c.streamURL)
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, c.streamURL, nil)
|
||||
req, err := http.NewRequestWithContext(ctx, http.MethodGet, c.streamURL, nil)
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("failed to create request: %w", err)
|
||||
}
|
||||
|
||||
req.Header.Set("User-Agent", "icecast-ripper/1.0")
|
||||
req.Header.Set("User-Agent", c.userAgent)
|
||||
|
||||
resp, err := c.client.Do(req)
|
||||
if err != nil {
|
||||
@@ -51,6 +77,7 @@ func (c *Checker) IsLive() (bool, error) {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// GetStreamURL returns the URL being monitored
|
||||
func (c *Checker) GetStreamURL() string {
|
||||
return c.streamURL
|
||||
}
|
||||
|
||||
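A minimal sketch of the context-aware check the scheduler now uses (runs inside the module, since streamchecker is internal; the stream URL is illustrative):

```go
// Sketch: check stream liveness with a bounded context, as the scheduler now does.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kemko/icecast-ripper/internal/streamchecker" // internal package: only usable within this module
)

func main() {
	checker := streamchecker.New(
		"http://example.com:8000/stream",
		streamchecker.WithUserAgent("icecast-ripper/0.3.0"),
	)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	live, err := checker.IsLiveWithContext(ctx)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("stream live:", live)
}
```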
2 vendor/github.com/tcolgate/mp3/.gitignore generated vendored Normal file
@@ -0,0 +1,2 @@
|
||||
*.mp3
|
||||
!internal/data/*.mp3
|
||||
20 vendor/github.com/tcolgate/mp3/LICENSE generated vendored Normal file
@@ -0,0 +1,20 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2015 Tristan Colgate-McFarlane and badgerodon
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
this software and associated documentation files (the "Software"), to deal in
|
||||
the Software without restriction, including without limitation the rights to
|
||||
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
|
||||
the Software, and to permit persons to whom the Software is furnished to do so,
|
||||
subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
|
||||
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
|
||||
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
|
||||
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
||||
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
6 vendor/github.com/tcolgate/mp3/README.md generated vendored Normal file
@@ -0,0 +1,6 @@
|
||||
MP3
|
||||
|
||||
Stream orientated mp3 frame decoder
|
||||
|
||||
[](https://godoc.org/github.com/tcolgate/mp3)
|
||||
|
||||
31 vendor/github.com/tcolgate/mp3/doc.go generated vendored Normal file
@@ -0,0 +1,31 @@
|
||||
// The MIT License (MIT)
|
||||
//
|
||||
// Copyright (c) 2015 Tristan Colgate-McFarlane and badgerodon
|
||||
//
|
||||
// Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
// this software and associated documentation files (the "Software"), to deal in
|
||||
// the Software without restriction, including without limitation the rights to
|
||||
// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
|
||||
// the Software, and to permit persons to whom the Software is furnished to do so,
|
||||
// subject to the following conditions:
|
||||
//
|
||||
// The above copyright notice and this permission notice shall be included in all
|
||||
// copies or substantial portions of the Software.
|
||||
//
|
||||
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
|
||||
// FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
|
||||
// COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
|
||||
// IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
||||
// CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
// Package mp3 provides decoding of mp3 files into their underlying frames. It
|
||||
// is primarily intended for streaming tasks with minimal internal buffering
|
||||
// and no requirement to seek.
|
||||
//
|
||||
// The implementation started as a reworking of github.com/badgerodon/mp3, and
|
||||
// has also drawn from Konrad Windszus' excellent article on mp3 frame parsing
|
||||
// http://www.codeproject.com/Articles/8295/MPEG-Audio-Frame-Header
|
||||
//
|
||||
// TODO CRC isn't currently checked.
|
||||
package mp3
|
||||
16 vendor/github.com/tcolgate/mp3/framechannelmode_string.go generated vendored Normal file
@@ -0,0 +1,16 @@
|
||||
// Code generated by "stringer -type=FrameChannelMode"; DO NOT EDIT
|
||||
|
||||
package mp3
|
||||
|
||||
import "fmt"
|
||||
|
||||
const _FrameChannelMode_name = "StereoJointStereoDualChannelSingleChannelChannelModeMax"
|
||||
|
||||
var _FrameChannelMode_index = [...]uint8{0, 6, 17, 28, 41, 55}
|
||||
|
||||
func (i FrameChannelMode) String() string {
|
||||
if i >= FrameChannelMode(len(_FrameChannelMode_index)-1) {
|
||||
return fmt.Sprintf("FrameChannelMode(%d)", i)
|
||||
}
|
||||
return _FrameChannelMode_name[_FrameChannelMode_index[i]:_FrameChannelMode_index[i+1]]
|
||||
}
|
||||
16 vendor/github.com/tcolgate/mp3/frameemphasis_string.go generated vendored Normal file
@@ -0,0 +1,16 @@
|
||||
// Code generated by "stringer -type=FrameEmphasis"; DO NOT EDIT
|
||||
|
||||
package mp3
|
||||
|
||||
import "fmt"
|
||||
|
||||
const _FrameEmphasis_name = "EmphNoneEmph5015EmphReservedEmphCCITJ17EmphMax"
|
||||
|
||||
var _FrameEmphasis_index = [...]uint8{0, 8, 16, 28, 39, 46}
|
||||
|
||||
func (i FrameEmphasis) String() string {
|
||||
if i >= FrameEmphasis(len(_FrameEmphasis_index)-1) {
|
||||
return fmt.Sprintf("FrameEmphasis(%d)", i)
|
||||
}
|
||||
return _FrameEmphasis_name[_FrameEmphasis_index[i]:_FrameEmphasis_index[i+1]]
|
||||
}
|
||||
16 vendor/github.com/tcolgate/mp3/framelayer_string.go generated vendored Normal file
@@ -0,0 +1,16 @@
|
||||
// Code generated by "stringer -type=FrameLayer"; DO NOT EDIT
|
||||
|
||||
package mp3
|
||||
|
||||
import "fmt"
|
||||
|
||||
const _FrameLayer_name = "LayerReservedLayer3Layer2Layer1LayerMax"
|
||||
|
||||
var _FrameLayer_index = [...]uint8{0, 13, 19, 25, 31, 39}
|
||||
|
||||
func (i FrameLayer) String() string {
|
||||
if i >= FrameLayer(len(_FrameLayer_index)-1) {
|
||||
return fmt.Sprintf("FrameLayer(%d)", i)
|
||||
}
|
||||
return _FrameLayer_name[_FrameLayer_index[i]:_FrameLayer_index[i+1]]
|
||||
}
|
||||
448 vendor/github.com/tcolgate/mp3/frames.go generated vendored Normal file
@@ -0,0 +1,448 @@
|
||||
package mp3
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"time"
|
||||
)
|
||||
|
||||
type (
|
||||
// Decoder translates a io.Reader into a series of frames
|
||||
Decoder struct {
|
||||
src io.Reader
|
||||
err error
|
||||
}
|
||||
|
||||
// Frame represents one individual mp3 frame
|
||||
Frame struct {
|
||||
buf []byte
|
||||
}
|
||||
|
||||
// FrameHeader represents the entire header of a frame
|
||||
FrameHeader []byte
|
||||
|
||||
// FrameVersion is the MPEG version given in the frame header
|
||||
FrameVersion byte
|
||||
// FrameLayer is the MPEG layer given in the frame header
|
||||
FrameLayer byte
|
||||
// FrameEmphasis is the Emphasis value from the frame header
|
||||
FrameEmphasis byte
|
||||
// FrameChannelMode is the Channel mode from the frame header
|
||||
FrameChannelMode byte
|
||||
// FrameBitRate is the bit rate from the frame header
|
||||
FrameBitRate int
|
||||
// FrameSampleRate is the sample rate from the frame header
|
||||
FrameSampleRate int
|
||||
|
||||
// FrameSideInfo holds the SideInfo bytes from the frame
|
||||
FrameSideInfo []byte
|
||||
)
|
||||
|
||||
//go:generate stringer -type=FrameVersion
|
||||
const (
|
||||
MPEG25 FrameVersion = iota
|
||||
MPEGReserved
|
||||
MPEG2
|
||||
MPEG1
|
||||
VERSIONMAX
|
||||
)
|
||||
|
||||
//go:generate stringer -type=FrameLayer
|
||||
const (
|
||||
LayerReserved FrameLayer = iota
|
||||
Layer3
|
||||
Layer2
|
||||
Layer1
|
||||
LayerMax
|
||||
)
|
||||
|
||||
//go:generate stringer -type=FrameEmphasis
|
||||
const (
|
||||
EmphNone FrameEmphasis = iota
|
||||
Emph5015
|
||||
EmphReserved
|
||||
EmphCCITJ17
|
||||
EmphMax
|
||||
)
|
||||
|
||||
//go:generate stringer -type=FrameChannelMode
|
||||
const (
|
||||
Stereo FrameChannelMode = iota
|
||||
JointStereo
|
||||
DualChannel
|
||||
SingleChannel
|
||||
ChannelModeMax
|
||||
)
|
||||
|
||||
const (
|
||||
// ErrInvalidBitrate indicates that the header information did not contain a recognized bitrate
|
||||
ErrInvalidBitrate FrameBitRate = -1
|
||||
)
|
||||
|
||||
var (
|
||||
bitrates = [VERSIONMAX][LayerMax][15]int{
|
||||
{ // MPEG 2.5
|
||||
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, // LayerReserved
|
||||
{0, 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144, 160}, // Layer3
|
||||
{0, 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144, 160}, // Layer2
|
||||
{0, 32, 48, 56, 64, 80, 96, 112, 128, 144, 160, 176, 192, 224, 256}, // Layer1
|
||||
},
|
||||
{ // Reserved
|
||||
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, // LayerReserved
|
||||
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, // Layer3
|
||||
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, // Layer2
|
||||
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, // Layer1
|
||||
},
|
||||
{ // MPEG 2
|
||||
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, // LayerReserved
|
||||
{0, 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144, 160}, // Layer3
|
||||
{0, 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144, 160}, // Layer2
|
||||
{0, 32, 48, 56, 64, 80, 96, 112, 128, 144, 160, 176, 192, 224, 256}, // Layer1
|
||||
},
|
||||
{ // MPEG 1
|
||||
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, // LayerReserved
|
||||
{0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320}, // Layer3
|
||||
{0, 32, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 384}, // Layer2
|
||||
{0, 32, 64, 96, 128, 160, 192, 224, 256, 288, 320, 352, 384, 416, 448}, // Layer1
|
||||
},
|
||||
}
|
||||
sampleRates = [int(VERSIONMAX)][3]int{
|
||||
{11025, 12000, 8000}, //MPEG25
|
||||
{0, 0, 0}, //MPEGReserved
|
||||
{22050, 24000, 16000}, //MPEG2
|
||||
{44100, 48000, 32000}, //MPEG1
|
||||
}
|
||||
|
||||
// ErrInvalidSampleRate indicates that no samplerate could be found for the frame header provided
|
||||
ErrInvalidSampleRate = FrameSampleRate(-1)
|
||||
|
||||
samplesPerFrame = [VERSIONMAX][LayerMax]int{
|
||||
{ // MPEG25
|
||||
0,
|
||||
576,
|
||||
1152,
|
||||
384,
|
||||
},
|
||||
{ // Reserved
|
||||
0,
|
||||
0,
|
||||
0,
|
||||
0,
|
||||
},
|
||||
{ // MPEG2
|
||||
0,
|
||||
576,
|
||||
1152,
|
||||
384,
|
||||
},
|
||||
{ // MPEG1
|
||||
0,
|
||||
1152,
|
||||
1152,
|
||||
384,
|
||||
},
|
||||
}
|
||||
slotSize = [LayerMax]int{
|
||||
0, // LayerReserved
|
||||
1, // Layer3
|
||||
1, // Layer2
|
||||
4, // Layer1
|
||||
}
|
||||
|
||||
// ErrNoSyncBits implies we could not find a valid frame header sync bit before EOF
|
||||
ErrNoSyncBits = errors.New("EOF before sync bits found")
|
||||
|
||||
// ErrPrematureEOF indicates that the file ended before a complete frame could be read
|
||||
ErrPrematureEOF = errors.New("EOF mid stream")
|
||||
)
|
||||
|
||||
func init() {
|
||||
bitrates[MPEG25] = bitrates[MPEG2]
|
||||
samplesPerFrame[MPEG25] = samplesPerFrame[MPEG2]
|
||||
}
|
||||
|
||||
// NewDecoder returns a decoder that will process the provided reader.
|
||||
func NewDecoder(r io.Reader) *Decoder {
|
||||
return &Decoder{r, nil}
|
||||
}
|
||||
|
||||
// fill slice d until it is of len l, using bytes from reader r
|
||||
func fillbuf(d []byte, r io.Reader, l int) (res []byte, err error) {
|
||||
if len(d) >= l {
|
||||
// we already have enough bytes
|
||||
return d, nil
|
||||
}
|
||||
|
||||
// How many bytes do we need to fetch
|
||||
missing := l - len(d)
|
||||
|
||||
// Does d have sufficient capacity? If not, extend it
|
||||
if cap(d) < l {
|
||||
if d == nil {
|
||||
d = make([]byte, l)
|
||||
} else {
|
||||
il := len(d)
|
||||
d = d[:cap(d)] // stretch d to its full capacity
|
||||
d = append(d, make([]byte, l-cap(d))...)
|
||||
d = d[:il] // we've extended the cap, reset len
|
||||
}
|
||||
}
|
||||
|
||||
d = d[:l]
|
||||
_, err = io.ReadFull(r, d[len(d)-missing:])
|
||||
|
||||
return d, err
|
||||
}
|
||||
|
||||
// Decode reads the next complete discovered frame into the provided
|
||||
// Frame struct. A count of skipped bytes will be written to skipped.
|
||||
func (d *Decoder) Decode(v *Frame, skipped *int) (err error) {
|
||||
// Truncate the array
|
||||
v.buf = v.buf[:0]
|
||||
|
||||
hLen := 4
|
||||
// locate a sync frame
|
||||
*skipped = 0
|
||||
for {
|
||||
v.buf, err = fillbuf(v.buf, d.src, hLen)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if v.buf[0] == 0xFF && (v.buf[1]&0xE0 == 0xE0) &&
|
||||
v.Header().Emphasis() != EmphReserved &&
|
||||
v.Header().Layer() != LayerReserved &&
|
||||
v.Header().Version() != MPEGReserved &&
|
||||
v.Header().SampleRate() != -1 &&
|
||||
v.Header().BitRate() != -1 {
|
||||
break
|
||||
}
|
||||
switch {
|
||||
case v.buf[1] == 0xFF:
|
||||
v.buf = v.buf[1:]
|
||||
*skipped++
|
||||
default:
|
||||
v.buf = v.buf[2:]
|
||||
*skipped += 2
|
||||
}
|
||||
}
|
||||
|
||||
crcLen := 0
|
||||
if v.Header().Protection() {
|
||||
crcLen = 2
|
||||
v.buf, err = fillbuf(v.buf, d.src, hLen+crcLen)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
sideLen, err := v.SideInfoLength()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
v.buf, err = fillbuf(v.buf, d.src, hLen+crcLen+sideLen)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
dataLen := v.Size()
|
||||
v.buf, err = fillbuf(v.buf, d.src, dataLen)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// SideInfoLength returns the expected side info length for this
|
||||
// mp3 frame
|
||||
func (f *Frame) SideInfoLength() (int, error) {
|
||||
switch f.Header().Version() {
|
||||
case MPEG1:
|
||||
switch f.Header().ChannelMode() {
|
||||
case SingleChannel:
|
||||
return 17, nil
|
||||
case Stereo, JointStereo, DualChannel:
|
||||
return 32, nil
|
||||
default:
|
||||
return 0, errors.New("bad channel mode")
|
||||
}
|
||||
case MPEG2, MPEG25:
|
||||
switch f.Header().ChannelMode() {
|
||||
case SingleChannel:
|
||||
return 9, nil
|
||||
case Stereo, JointStereo, DualChannel:
|
||||
return 17, nil
|
||||
default:
|
||||
return 0, errors.New("bad channel mode")
|
||||
}
|
||||
default:
|
||||
return 0, fmt.Errorf("bad version (%v)", f.Header().Version())
|
||||
}
|
||||
}
|
||||
|
||||
// Header returns the header for this frame
|
||||
func (f *Frame) Header() FrameHeader {
|
||||
return FrameHeader(f.buf[0:4])
|
||||
}
|
||||
|
||||
// CRC returns the CRC word stored in this frame
|
||||
func (f *Frame) CRC() (uint16, error) {
|
||||
var crc uint16
|
||||
if !f.Header().Protection() {
|
||||
return 0, nil
|
||||
}
|
||||
crcdata := bytes.NewReader(f.buf[4:6])
|
||||
err := binary.Read(crcdata, binary.BigEndian, &crc)
|
||||
return crc, err
|
||||
}
|
||||
|
||||
// SideInfo returns the side info for this frame
|
||||
func (f *Frame) SideInfo() FrameSideInfo {
|
||||
if f.Header().Protection() {
|
||||
return FrameSideInfo(f.buf[6:])
|
||||
}
|
||||
return FrameSideInfo(f.buf[4:])
|
||||
}
|
||||
|
||||
// Frame returns a string describing this frame, header and side info
|
||||
func (f *Frame) String() string {
|
||||
str := ""
|
||||
str += fmt.Sprintf("Header: \n%s", f.Header())
|
||||
str += fmt.Sprintf("SideInfo: \n%s", f.SideInfo())
|
||||
crc, err := f.CRC()
|
||||
str += fmt.Sprintf("CRC: %x (err: %v)\n", crc, err)
|
||||
str += fmt.Sprintf("Samples: %v\n", f.Samples())
|
||||
str += fmt.Sprintf("Size: %v\n", f.Size())
|
||||
str += fmt.Sprintf("Duration: %v\n", f.Duration())
|
||||
return str
|
||||
}
|
||||
|
||||
// Version returns the MPEG version from the header
|
||||
func (h FrameHeader) Version() FrameVersion {
|
||||
return FrameVersion((h[1] >> 3) & 0x03)
|
||||
}
|
||||
|
||||
// Layer returns the MPEG layer from the header
|
||||
func (h FrameHeader) Layer() FrameLayer {
|
||||
return FrameLayer((h[1] >> 1) & 0x03)
|
||||
}
|
||||
|
||||
// Protection indicates if there is a CRC present after the header (before the side data)
|
||||
func (h FrameHeader) Protection() bool {
|
||||
return (h[1] & 0x01) != 0x01
|
||||
}
|
||||
|
||||
// BitRate returns the calculated bit rate from the header
|
||||
func (h FrameHeader) BitRate() FrameBitRate {
|
||||
bitrateIdx := (h[2] >> 4) & 0x0F
|
||||
if bitrateIdx == 0x0F {
|
||||
return ErrInvalidBitrate
|
||||
}
|
||||
br := bitrates[h.Version()][h.Layer()][bitrateIdx] * 1000
|
||||
if br == 0 {
|
||||
return ErrInvalidBitrate
|
||||
}
|
||||
return FrameBitRate(br)
|
||||
}
|
||||
|
||||
// SampleRate returns the samplerate from the header
|
||||
func (h FrameHeader) SampleRate() FrameSampleRate {
|
||||
sri := (h[2] >> 2) & 0x03
|
||||
if sri == 0x03 {
|
||||
return ErrInvalidSampleRate
|
||||
}
|
||||
return FrameSampleRate(sampleRates[h.Version()][sri])
|
||||
}
|
||||
|
||||
// Pad returns the pad bit, indicating if there are extra samples
|
||||
// in this frame to make up the correct bitrate
|
||||
func (h FrameHeader) Pad() bool {
|
||||
return ((h[2] >> 1) & 0x01) == 0x01
|
||||
}
|
||||
|
||||
// Private retrusn the Private bit from the header
|
||||
func (h FrameHeader) Private() bool {
|
||||
return (h[2] & 0x01) == 0x01
|
||||
}
|
||||
|
||||
// ChannelMode returns the channel mode from the header
|
||||
func (h FrameHeader) ChannelMode() FrameChannelMode {
|
||||
return FrameChannelMode((h[3] >> 6) & 0x03)
|
||||
}
|
||||
|
||||
// CopyRight returns the CopyRight bit from the header
|
||||
func (h FrameHeader) CopyRight() bool {
|
||||
return (h[3]>>3)&0x01 == 0x01
|
||||
}
|
||||
|
||||
// Original returns the "original content" bit from the header
|
||||
func (h FrameHeader) Original() bool {
|
||||
return (h[3]>>2)&0x01 == 0x01
|
||||
}
|
||||
|
||||
// Emphasis returns the Emphasis from the header
|
||||
func (h FrameHeader) Emphasis() FrameEmphasis {
|
||||
return FrameEmphasis((h[3] & 0x03))
|
||||
}
|
||||
|
||||
// String dumps the frame header as a string for display purposes
|
||||
func (h FrameHeader) String() string {
|
||||
str := ""
|
||||
str += fmt.Sprintf(" Layer: %v\n", h.Layer())
|
||||
str += fmt.Sprintf(" Version: %v\n", h.Version())
|
||||
str += fmt.Sprintf(" Protection: %v\n", h.Protection())
|
||||
str += fmt.Sprintf(" BitRate: %v\n", h.BitRate())
|
||||
str += fmt.Sprintf(" SampleRate: %v\n", h.SampleRate())
|
||||
str += fmt.Sprintf(" Pad: %v\n", h.Pad())
|
||||
str += fmt.Sprintf(" Private: %v\n", h.Private())
|
||||
str += fmt.Sprintf(" ChannelMode: %v\n", h.ChannelMode())
|
||||
str += fmt.Sprintf(" CopyRight: %v\n", h.CopyRight())
|
||||
str += fmt.Sprintf(" Original: %v\n", h.Original())
|
||||
str += fmt.Sprintf(" Emphasis: %v\n", h.Emphasis())
|
||||
return str
|
||||
}
|
||||
|
||||
// NDataBegin is the number of bytes before the frame header at which the sample data begins
|
||||
// 0 indicates that the data begins after the side channel information. This data is the
|
||||
// data from the "bit reservoir" and can be up to 511 bytes
|
||||
func (i FrameSideInfo) NDataBegin() uint16 {
|
||||
return (uint16(i[0]) << 1 & (uint16(i[1]) >> 7))
|
||||
}
|
||||
|
||||
// Samples determines the number of samples based on the MPEG version and Layer from the header
|
||||
func (f *Frame) Samples() int {
|
||||
return samplesPerFrame[f.Header().Version()][f.Header().Layer()]
|
||||
}
|
||||
|
||||
// Size clculates the expected size of this frame in bytes based on the header
|
||||
// information
|
||||
func (f *Frame) Size() int {
|
||||
bps := float64(f.Samples()) / 8
|
||||
fsize := (bps * float64(f.Header().BitRate())) / float64(f.Header().SampleRate())
|
||||
if f.Header().Pad() {
|
||||
fsize += float64(slotSize[f.Header().Layer()])
|
||||
}
|
||||
return int(fsize)
|
||||
}
|
||||
|
||||
// Duration calculates the time duration of this frame based on the samplerate and number of samples
|
||||
func (f *Frame) Duration() time.Duration {
|
||||
ms := (1000 / float64(f.Header().SampleRate())) * float64(f.Samples())
|
||||
return time.Duration(int(float64(time.Millisecond) * ms))
|
||||
}
|
||||
|
||||
// String renders the side info as a string for display purposes
|
||||
func (i FrameSideInfo) String() string {
|
||||
str := ""
|
||||
str += fmt.Sprintf(" NDataBegin: %v\n", i.NDataBegin())
|
||||
return str
|
||||
}
|
||||
|
||||
// Reader returns an io.Reader that reads the individual bytes from the frame
|
||||
func (f *Frame) Reader() io.Reader {
|
||||
return bytes.NewReader(f.buf)
|
||||
}
|
||||
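The accessors above are what make per-file duration extraction possible: decode frames until end of input and sum `Frame.Duration()`. The sketch below illustrates that loop; the helper name, file path, and error handling are illustrative assumptions, not the project's actual scanRecordings code.

```go
// Hypothetical sketch: total playing time of an MP3 file using the vendored decoder.
package main

import (
	"fmt"
	"io"
	"os"
	"time"

	"github.com/tcolgate/mp3"
)

func mp3Duration(path string) (time.Duration, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	var (
		total   time.Duration
		frame   mp3.Frame
		skipped int
	)
	dec := mp3.NewDecoder(f)
	for {
		if err := dec.Decode(&frame, &skipped); err != nil {
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				return total, nil // end of file (possibly a truncated final frame)
			}
			return total, err
		}
		total += frame.Duration()
	}
}

func main() {
	d, err := mp3Duration("recording.mp3")
	fmt.Println(d, err)
}
```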
16
vendor/github.com/tcolgate/mp3/frameversion_string.go
generated
vendored
Normal file
@@ -0,0 +1,16 @@
// Code generated by "stringer -type=FrameVersion"; DO NOT EDIT

package mp3

import "fmt"

const _FrameVersion_name = "MPEG25MPEGReservedMPEG2MPEG1VERSIONMAX"

var _FrameVersion_index = [...]uint8{0, 6, 18, 23, 28, 38}

func (i FrameVersion) String() string {
	if i >= FrameVersion(len(_FrameVersion_index)-1) {
		return fmt.Sprintf("FrameVersion(%d)", i)
	}
	return _FrameVersion_name[_FrameVersion_index[i]:_FrameVersion_index[i+1]]
}
83
vendor/github.com/tcolgate/mp3/header.go
generated
vendored
Normal file
@@ -0,0 +1,83 @@
package mp3

/*
func (this *FrameHeader) Parse(bs []byte) error {
	this.Size = 0
	this.Samples = 0
	this.Duration = 0

	if len(bs) < 4 {
		return fmt.Errorf("not enough bytes")
	}
	if bs[0] != 0xFF || (bs[1]&0xE0) != 0xE0 {
		return fmt.Errorf("missing sync word, got: %x, %x", bs[0], bs[1])
	}
	this.Version = Version((bs[1] >> 3) & 0x03)
	if this.Version == MPEGReserved {
		return fmt.Errorf("reserved mpeg version")
	}

	this.Layer = Layer(((bs[1] >> 1) & 0x03))
	if this.Layer == LayerReserved {
		return fmt.Errorf("reserved layer")
	}

	this.Protection = (bs[1] & 0x01) != 0x01

	bitrateIdx := (bs[2] >> 4) & 0x0F
	if bitrateIdx == 0x0F {
		return fmt.Errorf("invalid bitrate: %v", bitrateIdx)
	}
	this.Bitrate = bitrates[this.Version][this.Layer][bitrateIdx] * 1000
	if this.Bitrate == 0 {
		return fmt.Errorf("invalid bitrate: %v", bitrateIdx)
	}

	sampleRateIdx := (bs[2] >> 2) & 0x03
	if sampleRateIdx == 0x03 {
		return fmt.Errorf("invalid sample rate: %v", sampleRateIdx)
	}
	this.SampleRate = sampleRates[this.Version][sampleRateIdx]

	this.Pad = ((bs[2] >> 1) & 0x01) == 0x01

	this.Private = (bs[2] & 0x01) == 0x01

	this.ChannelMode = ChannelMode(bs[3]>>6) & 0x03

	// todo: mode extension

	this.CopyRight = (bs[3]>>3)&0x01 == 0x01

	this.Original = (bs[3]>>2)&0x01 == 0x01

	this.Emphasis = Emphasis(bs[3] & 0x03)
	if this.Emphasis == EmphReserved {
		return fmt.Errorf("reserved emphasis")
	}

	this.Size = this.size()
	this.Samples = this.samples()
	this.Duration = this.duration()

	return nil
}

func (this *FrameHeader) samples() int {
	return samplesPerFrame[this.Version][this.Layer]
}

func (this *FrameHeader) size() int64 {
	bps := float64(this.samples()) / 8
	fsize := (bps * float64(this.Bitrate)) / float64(this.SampleRate)
	if this.Pad {
		fsize += float64(slotSize[this.Layer])
	}
	return int64(fsize)
}

func (this *FrameHeader) duration() time.Duration {
	ms := (1000 / float64(this.SampleRate)) * float64(this.samples())
	return time.Duration(time.Duration(float64(time.Millisecond) * ms))
}
*/
250
vendor/github.com/tcolgate/mp3/internal/data/bindata.go
generated
vendored
Normal file
@@ -0,0 +1,250 @@
package data

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"reflect"
	"strings"
	"unsafe"
	"os"
	"time"
	"io/ioutil"
	"path"
	"path/filepath"
)

func bindata_read(data, name string) ([]byte, error) {
	var empty [0]byte
	sx := (*reflect.StringHeader)(unsafe.Pointer(&data))
	b := empty[:]
	bx := (*reflect.SliceHeader)(unsafe.Pointer(&b))
	bx.Data = sx.Data
	bx.Len = len(data)
	bx.Cap = bx.Len

	gz, err := gzip.NewReader(bytes.NewBuffer(b))
	if err != nil {
		return nil, fmt.Errorf("Read %q: %v", name, err)
	}

	var buf bytes.Buffer
	_, err = io.Copy(&buf, gz)
	gz.Close()

	if err != nil {
		return nil, fmt.Errorf("Read %q: %v", name, err)
	}

	return buf.Bytes(), nil
}

type asset struct {
	bytes []byte
	info  os.FileInfo
}

type bindata_file_info struct {
	name    string
	size    int64
	mode    os.FileMode
	modTime time.Time
}

func (fi bindata_file_info) Name() string {
	return fi.name
}
func (fi bindata_file_info) Size() int64 {
	return fi.size
}
func (fi bindata_file_info) Mode() os.FileMode {
	return fi.mode
}
func (fi bindata_file_info) ModTime() time.Time {
	return fi.modTime
}
func (fi bindata_file_info) IsDir() bool {
	return false
}
func (fi bindata_file_info) Sys() interface{} {
	return nil
}

var _silent_1frame_go = "\x1f\x8b\x08\x00\x00\x09\x6e\x88\x00\xff\x64\x90\x41\x4b\x3b\x31\x10\xc5\xcf\x3b\x9f\xe2\xfd\xf7\xb4\x85\x7f\x1b\xaa\x17\x11\x7a\x50\xc1\x8b\x47\x8f\x22\x92\xee\xce\xa6\xa1\x9b\x99\x90\xa4\x4a\x91\x7e\x77\x77\xe3\x45\xf1\x10\x02\x33\x6f\x7e\xef\xcd\x44\xdb\x1f\xad\x63\x0c\xb6\x58\x22\x1f\xa2\xa6\x82\x76\x52\xd7\x12\xbd\xdb\x84\x8e\x1a\x63\xf0\xec\x27\x96\x72\x7f\x2e\x9c\xe1\x33\xca\x81\x91\xec\x47\x1d\xc2\x98\x34\xc0\x62\x7b\x75\x83\xa7\xbd\xc9\xd0\x11\x93\x0d\x0c\x96\x5e\x07\x1e\x20\x5a\x0e\x5e\x9c\x70\xce\xd4\xfc\x04\xbd\xbc\xee\xe7\x9f\x56\x44\xc6\x38\xbd\x75\x2c\x9c\x6c\x61\x38\x5d\xef\xbd\x54\xf6\x3a\x1e\xdd\xb7\xcb\x5a\x34\x70\xe8\x35\x9e\xb1\x31\x34\x9e\xa4\x87\x17\x5f\xba\x15\x3e\xa9\x59\x82\x72\xaa\x4f\xd3\x2f\x93\xff\xb5\xbe\xc3\x5d\xce\x5c\xba\x76\x41\x99\x5c\xdb\x6f\xdb\x31\xcd\x31\x37\x21\x5e\xb7\x2b\x6a\xfc\x58\x95\xff\x76\x10\x3f\x2d\xcc\x66\xbe\xc1\xe6\x71\xd6\x4f\x63\xd7\x3e\xe8\x69\xaa\x9b\x40\x23\x0b\xfe\x10\x60\x17\xfe\xc2\xb9\xd0\x85\xbe\x02\x00\x00\xff\xff\x0e\x9e\x37\x70\x54\x01\x00\x00"

func silent_1frame_go_bytes() ([]byte, error) {
	return bindata_read(
		_silent_1frame_go,
		"silent_1frame.go",
	)
}

func silent_1frame_go() (*asset, error) {
	bytes, err := silent_1frame_go_bytes()
	if err != nil {
		return nil, err
	}

	info := bindata_file_info{name: "silent_1frame.go", size: 340, mode: os.FileMode(420), modTime: time.Unix(1424984811, 0)}
	a := &asset{bytes: bytes, info: info}
	return a, nil
}

var _silent_1frame_mp3 = "\x1f\x8b\x08\x00\x00\x09\x6e\x88\x00\xff\xfa\xff\x7b\x43\x0a\x03\xff\x07\x06\x86\x4c\x06\x06\x06\x0e\x06\x06\x5e\x05\x06\x06\x46\x20\x5a\x02\xe4\x02\x99\x26\x0d\x0c\x0c\x2c\x3e\x8e\xbe\xae\xc6\x7a\x96\x96\x7a\xa6\x0c\xa3\x60\x14\x50\x08\x00\x01\x00\x00\xff\xff\xa1\x6f\x84\x53\x72\x02\x00\x00"

func silent_1frame_mp3_bytes() ([]byte, error) {
	return bindata_read(
		_silent_1frame_mp3,
		"silent_1frame.mp3",
	)
}

func silent_1frame_mp3() (*asset, error) {
	bytes, err := silent_1frame_mp3_bytes()
	if err != nil {
		return nil, err
	}

	info := bindata_file_info{name: "silent_1frame.mp3", size: 626, mode: os.FileMode(420), modTime: time.Unix(1424763406, 0)}
	a := &asset{bytes: bytes, info: info}
	return a, nil
}

// Asset loads and returns the asset for the given name.
// It returns an error if the asset could not be found or
// could not be loaded.
func Asset(name string) ([]byte, error) {
	cannonicalName := strings.Replace(name, "\\", "/", -1)
	if f, ok := _bindata[cannonicalName]; ok {
		a, err := f()
		if err != nil {
			return nil, fmt.Errorf("Asset %s can't read by error: %v", name, err)
		}
		return a.bytes, nil
	}
	return nil, fmt.Errorf("Asset %s not found", name)
}

// AssetInfo loads and returns the asset info for the given name.
// It returns an error if the asset could not be found or
// could not be loaded.
func AssetInfo(name string) (os.FileInfo, error) {
	cannonicalName := strings.Replace(name, "\\", "/", -1)
	if f, ok := _bindata[cannonicalName]; ok {
		a, err := f()
		if err != nil {
			return nil, fmt.Errorf("AssetInfo %s can't read by error: %v", name, err)
		}
		return a.info, nil
	}
	return nil, fmt.Errorf("AssetInfo %s not found", name)
}

// AssetNames returns the names of the assets.
func AssetNames() []string {
	names := make([]string, 0, len(_bindata))
	for name := range _bindata {
		names = append(names, name)
	}
	return names
}

// _bindata is a table, holding each asset generator, mapped to its name.
var _bindata = map[string]func() (*asset, error){
	"silent_1frame.go":  silent_1frame_go,
	"silent_1frame.mp3": silent_1frame_mp3,
}

// AssetDir returns the file names below a certain
// directory embedded in the file by go-bindata.
// For example if you run go-bindata on data/... and data contains the
// following hierarchy:
//     data/
//       foo.txt
//       img/
//         a.png
//         b.png
// then AssetDir("data") would return []string{"foo.txt", "img"}
// AssetDir("data/img") would return []string{"a.png", "b.png"}
// AssetDir("foo.txt") and AssetDir("notexist") would return an error
// AssetDir("") will return []string{"data"}.
func AssetDir(name string) ([]string, error) {
	node := _bintree
	if len(name) != 0 {
		cannonicalName := strings.Replace(name, "\\", "/", -1)
		pathList := strings.Split(cannonicalName, "/")
		for _, p := range pathList {
			node = node.Children[p]
			if node == nil {
				return nil, fmt.Errorf("Asset %s not found", name)
			}
		}
	}
	if node.Func != nil {
		return nil, fmt.Errorf("Asset %s not found", name)
	}
	rv := make([]string, 0, len(node.Children))
	for name := range node.Children {
		rv = append(rv, name)
	}
	return rv, nil
}

type _bintree_t struct {
	Func     func() (*asset, error)
	Children map[string]*_bintree_t
}

var _bintree = &_bintree_t{nil, map[string]*_bintree_t{
	"silent_1frame.go":  &_bintree_t{silent_1frame_go, map[string]*_bintree_t{}},
	"silent_1frame.mp3": &_bintree_t{silent_1frame_mp3, map[string]*_bintree_t{}},
}}

// Restore an asset under the given directory
func RestoreAsset(dir, name string) error {
	data, err := Asset(name)
	if err != nil {
		return err
	}
	info, err := AssetInfo(name)
	if err != nil {
		return err
	}
	err = os.MkdirAll(_filePath(dir, path.Dir(name)), os.FileMode(0755))
	if err != nil {
		return err
	}
	err = ioutil.WriteFile(_filePath(dir, name), data, info.Mode())
	if err != nil {
		return err
	}
	err = os.Chtimes(_filePath(dir, name), info.ModTime(), info.ModTime())
	if err != nil {
		return err
	}
	return nil
}

// Restore assets under the given directory recursively
func RestoreAssets(dir, name string) error {
	children, err := AssetDir(name)
	if err != nil { // File
		return RestoreAsset(dir, name)
	} else { // Dir
		for _, child := range children {
			err = RestoreAssets(dir, path.Join(name, child))
			if err != nil {
				return err
			}
		}
	}
	return nil
}

func _filePath(dir, name string) string {
	cannonicalName := strings.Replace(name, "\\", "/", -1)
	return filepath.Join(append([]string{dir}, strings.Split(cannonicalName, "/")...)...)
}
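For orientation: this generated file is what backs the silent-frame asset. Because the package lives under internal/, only code inside github.com/tcolgate/mp3 can call it; the snippet below is a hypothetical in-package helper (not part of the vendored source) showing the lookup that silent_1frame.go's init performs.

```go
// Hypothetical helper inside package mp3: fetch the embedded silent
// frame bytes via the generated accessor above.
package mp3

import "github.com/tcolgate/mp3/internal/data"

func embeddedSilentMP3() ([]byte, error) {
	return data.Asset("silent_1frame.mp3")
}
```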
17
vendor/github.com/tcolgate/mp3/internal/data/silent_1frame.go
generated
vendored
Normal file
@@ -0,0 +1,17 @@
package data

import "log"

var (
	// SilentBytes is the raw data for one frame of 128 Kb/s lame-encoded nothingness
	SilentBytes []byte
)

//go:generate go-bindata -pkg data -nomemcopy ./
func init() {
	var err error
	SilentBytes, err = Asset("silent_1frame.mp3")
	if err != nil {
		log.Fatalf("Could not open silent_1frame.mp3 asset")
	}
}
BIN
vendor/github.com/tcolgate/mp3/internal/data/silent_1frame.mp3
generated
vendored
Normal file
Binary file not shown.
51
vendor/github.com/tcolgate/mp3/silence.go
generated
vendored
Normal file
@@ -0,0 +1,51 @@
package mp3

import (
	"bytes"
	"io"

	"github.com/tcolgate/mp3/internal/data"
)

var (
	// SilentFrame is the sound of Ripley screaming on the Nostromo, from the outside
	SilentFrame *Frame

	// SilentBytes is the raw data behind SilentFrame
	SilentBytes []byte
)

func init() {
	skipped := 0
	SilentBytes = data.SilentBytes

	dec := NewDecoder(bytes.NewBuffer(SilentBytes))
	frame := Frame{}
	SilentFrame = &frame
	dec.Decode(&frame, &skipped)
}

type silenceReader struct {
	int // Location into the silence frame
}

func (s *silenceReader) Close() error {
	return nil
}

func (s *silenceReader) Read(out []byte) (int, error) {
	for i := 0; i < len(out); i++ {
		out[i] = SilentBytes[s.int]
		s.int++
		if s.int >= len(SilentBytes) {
			s.int = 0
		}
	}

	return len(out), nil
}

// MakeSilence provides a constant stream of silent frames.
func MakeSilence() io.ReadCloser {
	return &silenceReader{0}
}
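MakeSilence is the hook the ripper's silence handling builds on: it returns an endless reader that loops over the silent frame data. A minimal sketch of consuming it follows; the destination file and byte count are illustrative assumptions, not project code.

```go
// Minimal sketch: write roughly 10 seconds of looping silent MP3 data to a file.
package main

import (
	"io"
	"os"

	"github.com/tcolgate/mp3"
)

func main() {
	silence := mp3.MakeSilence()
	defer silence.Close()

	out, err := os.Create("silence.mp3")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// At 128 kbit/s, 10 seconds is roughly 160 KB of MP3 data.
	if _, err := io.CopyN(out, silence, 160*1024); err != nil {
		panic(err)
	}
}
```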
6
vendor/modules.txt
vendored
@@ -9,8 +9,6 @@ github.com/go-viper/mapstructure/v2/internal/errors
# github.com/gorilla/feeds v1.2.0
## explicit; go 1.20
github.com/gorilla/feeds
# github.com/mattn/go-sqlite3 v1.14.27
## explicit; go 1.19
# github.com/pelletier/go-toml/v2 v2.2.3
## explicit; go 1.21.0
github.com/pelletier/go-toml/v2
@@ -49,6 +47,10 @@ github.com/spf13/viper/internal/features
# github.com/subosito/gotenv v1.6.0
## explicit; go 1.18
github.com/subosito/gotenv
# github.com/tcolgate/mp3 v0.0.0-20170426193717-e79c5a46d300
## explicit
github.com/tcolgate/mp3
github.com/tcolgate/mp3/internal/data
# go.uber.org/multierr v1.11.0
## explicit; go 1.19
go.uber.org/multierr