FineTuneRequest

@Serializable
data class FineTuneRequest(
    val trainingFile: FileId,
    val validationFile: FileId? = null,
    val model: ModelId? = null,
    val nEpochs: Int? = null,
    val batchSize: Int? = null,
    val learningRateMultiplier: Double? = null,
    val promptLossWeight: Double? = null,
    val computeClassificationMetrics: Boolean? = null,
    val classificationNClasses: Int? = null,
    val classificationPositiveClass: String? = null,
    val classificationBetas: List<Double>? = null,
    val suffix: String? = null
)

Create a Fine-Tune request.

Constructors

constructor(
    trainingFile: FileId,
    validationFile: FileId? = null,
    model: ModelId? = null,
    nEpochs: Int? = null,
    batchSize: Int? = null,
    learningRateMultiplier: Double? = null,
    promptLossWeight: Double? = null,
    computeClassificationMetrics: Boolean? = null,
    classificationNClasses: Int? = null,
    classificationPositiveClass: String? = null,
    classificationBetas: List<Double>? = null,
    suffix: String? = null
)
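
A minimal construction sketch. It assumes FileId and ModelId wrap raw string identifiers and that the library's imports are in scope; the IDs shown are placeholders.

// Only trainingFile is required; the remaining parameters are optional (null by default).
// "file-abc123" and "curie" are placeholder identifiers.
val request = FineTuneRequest(
    trainingFile = FileId("file-abc123"),
    model = ModelId("curie"),
    nEpochs = 4,
    suffix = "my-experiment"
)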

Properties

@SerialName(value = "batch_size")
val batchSize: Int? = null

The batch size to use for training. The batch size is the number of training examples used in a single forward and backward pass.

@SerialName(value = "classification_betas")
val classificationBetas: List<Double>? = null

If this is provided, we calculate F-beta scores at the specified beta values. The F-beta score is a generalization of the F-1 score. This is only used for binary classification.
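
For reference, a sketch of the standard F-beta definition in terms of precision and recall; this is textbook math, not code from this library.

// F-beta: beta > 1 weights recall more heavily, beta < 1 weights precision more heavily.
// beta = 1.0 reduces to the familiar F-1 score.
fun fBeta(precision: Double, recall: Double, beta: Double): Double {
    val b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
}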

@SerialName(value = "classification_n_classes")
val classificationNClasses: Int? = null

The number of classes in a classification task.

@SerialName(value = "classification_positive_class")
val classificationPositiveClass: String? = null

The positive class in binary classification.

@SerialName(value = "compute_classification_metrics")
val computeClassificationMetrics: Boolean? = null

If set, we calculate classification-specific metrics such as accuracy and F-1 score using the validation set at the end of every epoch. These metrics can be viewed in the results file.
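
A sketch of a binary-classification request; since the metrics are computed on the validation set, a validation file is supplied alongside the classification options (all file IDs are placeholders).

// Enable epoch-level accuracy/F-1 metrics plus extra F-beta scores for a binary task.
val classificationRequest = FineTuneRequest(
    trainingFile = FileId("file-train-abc"),
    validationFile = FileId("file-valid-def"),
    computeClassificationMetrics = true,
    classificationPositiveClass = "positive",
    classificationBetas = listOf(0.5, 1.0, 2.0)
)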

@SerialName(value = "learning_rate_multiplier")
val learningRateMultiplier: Double? = null

The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pretraining multiplied by this value.

@SerialName(value = "model")
val model: ModelId? = null

The name of the base model to fine-tune.

@SerialName(value = "n_epochs")
val nEpochs: Int? = null

The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.

@SerialName(value = "prompt_loss_weight")
val promptLossWeight: Double? = null

The weight to use for loss on the prompt tokens. This controls how much the model tries to learn to generate the prompt (as compared to the completion which always has a weight of 1.0), and can add a stabilizing effect to training when completions are short.

@SerialName(value = "suffix")
val suffix: String? = null

A string of up to 40 characters that will be added to your fine-tuned model name.

@SerialName(value = "training_file")
val trainingFile: FileId

The ID of an uploaded file that contains training data.

@SerialName(value = "validation_file")
val validationFile: FileId? = null

The ID of an uploaded file that contains validation data.
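
Because the class is annotated with @Serializable and snake_case @SerialName values, it can be encoded with kotlinx.serialization. A sketch, assuming the library provides serializers for FileId and ModelId, and reusing the request value built earlier:

import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

// Optional fields left at their null defaults are omitted from the payload.
val payload = Json.encodeToString(request)
// Field names follow the @SerialName mappings, e.g. "training_file", "n_epochs", "batch_size".
println(payload)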