http://mathhelpforum.com/algebra/183362-non-perfect-square-trinomials.html

# Thread: non perfect square trinomials
1. ## non perfect square trinomials
Hi;
Do some non-perfect-square trinomials factor?
Thanks.
2. ## Re: non perfect square trinomials
Absolutely. $\displaystyle x^2 + 5x + 4 = (x + 4)(x + 1)$ for example...
3. ## Re: non perfect square trinomials
Wow, thanks for the quick response, Prove It. That's all I wanted to know.
https://blog.floydhub.com/long-short-term-memory-from-zero-to-hero-with-pytorch/

Just like us, Recurrent Neural Networks (RNNs) can be very forgetful. This struggle with short-term memory causes RNNs to lose their effectiveness in most tasks. However, do not fret, Long Short-Term Memory networks (LSTMs) have great memories and can remember information which the vanilla RNN is unable to!
LSTMs are a particular variant of RNNs, therefore having a grasp of the concepts surrounding RNNs will significantly aid your understanding of LSTMs in this article. I covered the mechanism of RNNs in my previous article here.
## A quick recap on RNNs
RNNs process inputs in a sequential manner, where the context from the previous input is considered when computing the output of the current step. This allows the neural network to carry information over different time steps rather than keeping all the inputs independent of each other.
However, a significant shortcoming that plagues the typical RNN is the problem of vanishing/exploding gradients. This problem arises when back-propagating through the RNN during training, especially for networks with deeper layers. The gradients have to go through continuous matrix multiplications during the back-propagation process due to the chain rule, causing the gradient to either shrink exponentially (vanish) or blow up exponentially (explode). Having a gradient that is too small prevents the weights from updating and learning, whereas extremely large gradients cause the model to be unstable.
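The shrink-or-blow-up effect of the chain rule can be sketched with a toy calculation (illustrative numbers only, not from the article):

```python
# Toy illustration: back-propagating through T time steps multiplies the
# gradient by (roughly) the same recurrent weight factor at every step.
def gradient_magnitude(weight_scale, n_steps):
    grad = 1.0
    for _ in range(n_steps):
        grad *= weight_scale  # chain rule: one factor per time step
    return grad

vanishing = gradient_magnitude(0.5, 50)   # |w| < 1 -> shrinks exponentially
exploding = gradient_magnitude(1.5, 50)   # |w| > 1 -> blows up exponentially

print(vanishing)  # ~8.9e-16, effectively zero
print(exploding)  # ~6.4e+8, numerically unstable
```

After just 50 steps, a factor slightly below 1 leaves essentially no gradient, and a factor slightly above 1 produces an enormous one — which is exactly why long sequences are hard for plain RNNs.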
Due to these issues, RNNs are unable to work with longer sequences and hold on to long-term dependencies, making them suffer from “short-term memory”.
## What are LSTMs
While LSTMs are a kind of RNN and function similarly to traditional RNNs, their gating mechanism is what sets them apart. This feature addresses the “short-term memory” problem of RNNs.
The difference lies mainly in the LSTM’s ability to preserve long-term memory. This is especially important in the majority of Natural Language Processing (NLP), time-series, and other sequential tasks. For example, let’s say we have a network generating text based on some input given to us. At the start of the text, it is mentioned that the author has a “dog named Cliff”. After a few other sentences where there is no mention of a pet or dog, the author brings up his pet again, and the model has to complete the sentence "However, Cliff, my pet ____". As the word pet appeared right before the blank, an RNN can deduce that the next word will likely be an animal that can be kept as a pet.
However, due to the short-term memory, the typical RNN will only be able to use the contextual information from the text that appeared in the last few sentences - which is not useful at all. The RNN has no clue as to what animal the pet might be as the relevant information from the start of the text has already been lost.
On the other hand, the LSTM can retain the earlier information that the author has a pet dog, and this contextual information from a much earlier time step will aid the model in generating "dog" when it comes to filling in the blank.
## Inner workings of the LSTM
The secret sauce to the LSTM lies in the gating mechanism within each LSTM cell. In a normal RNN cell, the input at the current time step and the hidden state from the previous time step are passed through a tanh activation function to obtain a new hidden state and output.
LSTMs, on the other hand, have a slightly more complex structure. At each time step, the LSTM cell takes in 3 different pieces of information -- the current input data, the short-term memory from the previous cell (similar to hidden states in RNNs) and lastly the long-term memory.
The short-term memory is commonly referred to as the hidden state, and the long-term memory is usually known as the cell state.
The cell then uses gates to regulate the information to be kept or discarded at each time step before passing on the long-term and short-term information to the next cell.
These gates can be seen as water filters. Ideally, the role of these gates is supposed to selectively remove any irrelevant information, similar to how water filters prevent impurities from passing through. At the same time, only water and beneficial nutrients can pass through these filters, just like how the gates only hold on to the useful information. Of course, these gates need to be trained to accurately filter what is useful and what is not.
These gates are called the Input Gate, the Forget Gate, and the Output Gate. There are many variants to the names of these gates; nevertheless, the calculations and workings of these gates are mostly the same.
Let’s go through the mechanisms of these gates one-by-one.
### Input Gate
The input gate decides what new information will be stored in the long-term memory. It only works with the information from the current input and the short-term memory from the previous time step. Therefore, it has to filter out the information from these variables that are not useful.
Mathematically, this is achieved using 2 layers. The first layer can be seen as the filter which selects what information can pass through it and what information will be discarded. To create this layer, we pass the short-term memory and current input into a sigmoid function. The sigmoid function transforms the values to be between 0 and 1, with 0 indicating that part of the information is unimportant, whereas 1 indicates that the information will be used. This helps to decide the values to be kept and used, as well as the values to be discarded. As the layer is trained through back-propagation, the weights in the sigmoid function will be updated such that it learns to only let the useful features pass through while discarding the less critical ones.
$$i_1 = \sigma(W_{i_1} \cdot (H_{t-1}, x_t) + bias_{i_1})$$
The second layer also takes the short-term memory and current input and passes them through an activation function, usually the $$tanh$$ function, to regulate the network.
$$i_2 = tanh(W_{i_2} \cdot (H_{t-1}, x_t) + bias_{i_2})$$
The outputs from these 2 layers are then multiplied, and the final outcome represents the information to be kept in the long-term memory and used as the output.
$$i_{input} = i_1 * i_2$$
### Forget Gate
The forget gate decides which information from the long-term memory should be kept or discarded. This is done by multiplying the incoming long-term memory by a forget vector generated by the current input and incoming short-term memory.
Just like the first layer in the Input gate, the forget vector is also a selective filter layer. To obtain the forget vector, the short-term memory and current input are passed through a sigmoid function, similar to the first layer in the Input Gate above, but with different weights. The vector will be made up of values between 0 and 1 and will be multiplied with the long-term memory to choose which parts of the long-term memory to retain.
$$f = \sigma(W_{forget} \cdot (H_{t-1}, x_t) + bias_{forget})$$
The outputs from the Input gate and the Forget gate will undergo a pointwise addition to give a new version of the long-term memory, which will be passed on to the next cell. This new long-term memory will also be used in the final gate, the Output gate.
$$C_t = C_{t-1} * f + i_{input}$$
### Output Gate
The output gate will take the current input, the previous short-term memory, and the newly computed long-term memory to produce the new short-term memory/hidden state which will be passed on to the cell in the next time step. The output of the current time step can also be drawn from this hidden state.
First, the previous short-term memory and current input will be passed into a sigmoid function (Yes, this is the 3rd time we’re doing this) with different weights yet again to create the third and final filter. Then, we put the new long-term memory through an activation $$tanh$$ function. The output from these 2 processes will be multiplied to produce the new short-term memory.
$$O_1 = \sigma (W_{output_1} \cdot (H_{t-1}, x_t) + bias_{output_1})$$
$$O_2 = tanh(W_{output_2} \cdot C_t + bias_{output_2})$$
$$H_t, O_t = O_1 * O_2$$
The short-term and long-term memory produced by these gates will then be carried over to the next cell for the process to be repeated. The output of each time step can be obtained from the short-term memory, also known as the hidden state.
That's all there is to the mechanisms of the typical LSTM structure. Not all that tough, eh?
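To make the equations concrete, here is a hedged NumPy sketch of a single untrained LSTM cell step following the article's formulas above; the weight names and sizes are assumptions for the example, not any library's API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    # (H_{t-1}, x_t) concatenated, as in the equations above
    concat = np.concatenate([h_prev, x_t])

    i1 = sigmoid(W['i1'] @ concat + b['i1'])   # input gate filter
    i2 = np.tanh(W['i2'] @ concat + b['i2'])   # candidate information
    i_input = i1 * i2

    f = sigmoid(W['f'] @ concat + b['f'])      # forget vector
    c_t = c_prev * f + i_input                 # new long-term memory C_t

    o1 = sigmoid(W['o1'] @ concat + b['o1'])   # output gate filter
    o2 = np.tanh(W['o2'] @ c_t + b['o2'])
    h_t = o1 * o2                              # new short-term memory H_t
    return h_t, c_t

# Tiny random example: input size 3, hidden size 4
rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4
W = {k: rng.normal(size=(hidden_dim, hidden_dim + input_dim)) for k in ['i1', 'i2', 'f', 'o1']}
W['o2'] = rng.normal(size=(hidden_dim, hidden_dim))
b = {k: np.zeros(hidden_dim) for k in ['i1', 'i2', 'f', 'o1', 'o2']}

h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
h, c = lstm_cell_step(rng.normal(size=input_dim), h, c, W, b)
print(h.shape, c.shape)  # (4,) (4,)
```

Each gate is just a small weighted layer over the same (hidden state, input) pair; the only differences between them are their weights and where their outputs are applied.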
## Code Implementation
With the necessary theoretical understanding of LSTMs, let's start implementing it in code. We'll be using the PyTorch library today.
Before we jump into a project with a full dataset, let's take a look at how the PyTorch LSTM layer really works in practice by visualizing the outputs. We don't need to instantiate a model to see how the layer works. You can run this on FloydHub under LSTM_starter.ipynb. (You don’t need to run on a GPU for this portion.)
import torch
import torch.nn as nn
Just like the other kinds of layers, we can instantiate an LSTM layer and provide it with the necessary arguments. The full documentation of the accepted arguments can be found here. In this example, we will only be defining the input dimension, hidden dimension, and the number of layers.
• Input dimension - represents the size of the input at each time step, e.g. input of dimension 5 will look like this [1, 3, 8, 2, 3]
• Hidden dimension - represents the size of the hidden state and cell state at each time step, e.g. with 3 stacked layers, a batch size of 5, and a hidden dimension of 4, the hidden state and cell state will both have the shape [3, 5, 4]
• Number of layers - the number of LSTM layers stacked on top of each other
input_dim = 5
hidden_dim = 10
n_layers = 1
lstm_layer = nn.LSTM(input_dim, hidden_dim, n_layers, batch_first=True)
Let's create some dummy data to see how the layer takes in the input. As our input dimension is 5, we have to create a tensor of the shape (1, 1, 5) which represents (batch size, sequence length, input dimension).
Additionally, we'll have to initialize a hidden state and cell state for the LSTM as this is the first cell. The hidden state and cell state are stored in a tuple with the format (hidden_state, cell_state).
batch_size = 1
seq_len = 1
inp = torch.randn(batch_size, seq_len, input_dim)
hidden_state = torch.randn(n_layers, batch_size, hidden_dim)
cell_state = torch.randn(n_layers, batch_size, hidden_dim)
hidden = (hidden_state, cell_state)
print("Input shape: {}".format(tuple(inp.shape)))
print("Hidden shape: ({}, {})".format(tuple(hidden[0].shape), tuple(hidden[1].shape)))
[Out]:
Input shape: (1, 1, 5)
Hidden shape: ((1, 1, 10), (1, 1, 10))
Next, we’ll feed the input and hidden states and see what we’ll get back from it.
out, hidden = lstm_layer(inp, hidden)
print("Output shape: ", out.shape)
print("Hidden: ", hidden)
[Out]: Output shape: torch.Size([1, 1, 10])
Hidden: (tensor([[[ 0.1749, 0.0099, -0.3004, 0.2846, -0.2262, -0.5257, 0.2925, -0.1894, 0.1166, -0.1197]]], grad_fn=<StackBackward>), tensor([[[ 0.4167, 0.0385, -0.4982, 0.6955, -0.9213, -1.0072, 0.4426,
In the process above, we saw how the LSTM cell processes the input and hidden states at each time step. However, in most cases, we'll be processing the input data in large sequences. The LSTM can also take in sequences of variable length and produce an output at each time step. Let's try changing the sequence length this time.
seq_len = 3
inp = torch.randn(batch_size, seq_len, input_dim)
out, hidden = lstm_layer(inp, hidden)
print(out.shape)
[Out]: torch.Size([1, 3, 10])
This time, the output's 2nd dimension is 3, indicating that there were 3 outputs given by the LSTM. This corresponds to the length of our input sequence. For the use cases where we'll need an output at every time step (many-to-many), such as Text Generation, the output of each time step can be extracted directly from the 2nd dimension and fed into a fully connected layer. For text classification tasks (many-to-one), such as Sentiment Analysis, the last output can be taken and fed into a classifier.
# Obtaining the last output
out = out.squeeze()[-1, :]
print(out.shape)
[Out]: torch.Size([10])
## Project: Sentiment Analysis on Amazon Reviews
For this project, we’ll be using the Amazon customer reviews dataset which can be found on Kaggle. The dataset contains a total of 4 million reviews with each review labeled to be of either positive or negative sentiment. You can run the code implementation in this article on FloydHub using their GPUs on the cloud by clicking the following link and using the main.ipynb notebook.
This will speed up the training process significantly. Alternatively, the link to the GitHub repository can be found here.
Our goal in this implementation will be to create an LSTM model that can accurately classify and distinguish the sentiment of a review. To do so, we’ll have to start with some data pre-processing, defining and training the model, followed by assessing the model.
The process flow of our implementation moves through those stages in order.
We will only be using 1 million reviews in this implementation to speed things up. However, feel free to run it yourself with the entire dataset if you have the time and computing capacity.
For our data pre-processing steps, we'll be using regex, Numpy and the NLTK (Natural Language Toolkit) library for some simple NLP helper functions. As the data is compressed in the bz2 format, we'll use the Python bz2 module to read the data.
import bz2
from collections import Counter
import re
import nltk
import numpy as np
train_file = bz2.BZ2File('../input/amazon_reviews/train.ft.txt.bz2')
test_file = bz2.BZ2File('../input/amazon_reviews/test.ft.txt.bz2')
# Reading in the data as lists of byte strings
train_file = train_file.readlines()
test_file = test_file.readlines()
print("Number of training reviews: " + str(len(train_file)))
print("Number of test reviews: " + str(len(test_file)))
[Out]:
Number of training reviews: 3600000
Number of test reviews: 400000
This dataset contains a total of 4 million reviews - 3.6 million training and 0.4 million for testing. We will be using only 800k for training and 200k for testing here -- this is still a large amount of data.
num_train = 800000 # We're training on the first 800,000 reviews in the dataset
num_test = 200000 # Using 200,000 reviews from test set
train_file = [x.decode('utf-8') for x in train_file[:num_train]]
test_file = [x.decode('utf-8') for x in test_file[:num_test]]
The format of the sentences is as follows:
__label__2 Stunning even for the non-gamer: This soundtrack was beautiful! It paints the scenery in your mind so well I would recommend it even to people who hate vid. game music! I have played the game Chrono Cross but out of all of the games I have ever played it has the best music! It backs away from crude keyboarding and takes a fresher step with great guitars and soulful orchestras. It would impress anyone who cares to listen! ^_^
We'll have to extract the labels from the sentences. The data is in the format __label__1/2 <sentence>, therefore we can easily split it accordingly. Positive sentiment labels are stored as 1 and negative are stored as 0.
We will also change all URLs to a standard <url> as the exact URL is irrelevant to the sentiment in most cases.
# Extracting labels from sentences
train_labels = [0 if x.split(' ')[0] == '__label__1' else 1 for x in train_file]
train_sentences = [x.split(' ', 1)[1][:-1].lower() for x in train_file]
test_labels = [0 if x.split(' ')[0] == '__label__1' else 1 for x in test_file]
test_sentences = [x.split(' ', 1)[1][:-1].lower() for x in test_file]
# Some simple cleaning of data
for i in range(len(train_sentences)):
    train_sentences[i] = re.sub(r'\d', '0', train_sentences[i])
for i in range(len(test_sentences)):
    test_sentences[i] = re.sub(r'\d', '0', test_sentences[i])
# Modify URLs to <url>
for i in range(len(train_sentences)):
    if 'www.' in train_sentences[i] or 'http:' in train_sentences[i] or 'https:' in train_sentences[i] or '.com' in train_sentences[i]:
        train_sentences[i] = re.sub(r"([^ ]+(?<=\.[a-z]{3}))", "<url>", train_sentences[i])
for i in range(len(test_sentences)):
    if 'www.' in test_sentences[i] or 'http:' in test_sentences[i] or 'https:' in test_sentences[i] or '.com' in test_sentences[i]:
        test_sentences[i] = re.sub(r"([^ ]+(?<=\.[a-z]{3}))", "<url>", test_sentences[i])
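As a quick sanity check on that pattern (the sample sentence here is hypothetical, not from the dataset), the lookbehind `(?<=\.[a-z]{3})` forces a match to end right after a dot followed by three lowercase letters, so whole URL-like tokens get collapsed:

```python
import re

# Hypothetical sample: only the token ending in ".com" is replaced
sample = "check out www.example.com for more"
cleaned = re.sub(r"([^ ]+(?<=\.[a-z]{3}))", "<url>", sample)
print(cleaned)  # check out <url> for more
```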
After quickly cleaning the data, we will tokenize the sentences, which is a standard NLP task.
Tokenization is the task of splitting a sentence into individual tokens, which can be words or punctuation, etc.
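For illustration, here is a toy regex tokenizer (a simplified sketch, not the NLTK tokenizer used in the actual code below):

```python
import re

def simple_tokenize(sentence):
    # Split into word tokens and single punctuation marks
    return re.findall(r"\w+|[^\w\s]", sentence)

print(simple_tokenize("I loved it, truly!"))
# ['I', 'loved', 'it', ',', 'truly', '!']
```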
There are many NLP libraries that can do this, such as spaCy or Scikit-learn, but we will be using NLTK here as it has one of the faster tokenizers.
The words will then be stored in a dictionary mapping the word to its number of appearances. These words will become our vocabulary.
words = Counter() # Dictionary that will map a word to the number of times it appeared in all the training sentences
for i, sentence in enumerate(train_sentences):
    # The sentences will be stored as a list of words/tokens
    train_sentences[i] = []
    for word in nltk.word_tokenize(sentence):  # Tokenizing the words
        words.update([word.lower()])  # Converting all the words to lowercase
        train_sentences[i].append(word)
    if i%20000 == 0:
        print(str((i*100)/num_train) + "% done")
print("100% done")
• To remove typos and words that likely don't exist, we'll remove all words from the vocab that only appear once throughout.
• To account for unknown words and padding, we'll have to add them to our vocabulary as well. Each word in the vocabulary will then be assigned an integer index and after that mapped to this integer.
# Removing the words that only appear once
words = {k:v for k,v in words.items() if v>1}
# Sorting the words according to the number of appearances, with the most common word being first
words = sorted(words, key=words.get, reverse=True)
# Adding padding and unknown to our vocabulary so that they will be assigned an index
words = ['_PAD','_UNK'] + words
# Dictionaries to store the word to index mappings and vice versa
word2idx = {o:i for i,o in enumerate(words)}
idx2word = {i:o for i,o in enumerate(words)}
With the mappings, we'll convert the words in the sentences to their corresponding indexes.
for i, sentence in enumerate(train_sentences):
    # Looking up the mapping dictionary and assigning the index to the respective words
    train_sentences[i] = [word2idx[word] if word in word2idx else 0 for word in sentence]

for i, sentence in enumerate(test_sentences):
    # For test sentences, we have to tokenize the sentences as well
    test_sentences[i] = [word2idx[word.lower()] if word.lower() in word2idx else 0 for word in nltk.word_tokenize(sentence)]
In the last pre-processing step, we'll be padding the sentences with 0s and shortening the lengthy sentences so that the data can be trained in batches to speed things up.
# Defining a function that either shortens sentences or pads sentences with 0 to a fixed length
def pad_input(sentences, seq_len):
    features = np.zeros((len(sentences), seq_len), dtype=int)
    for ii, review in enumerate(sentences):
        if len(review) != 0:
            features[ii, -len(review):] = np.array(review)[:seq_len]
    return features

seq_len = 200  # The length that the sentences will be padded/shortened to

train_sentences = pad_input(train_sentences, seq_len)
test_sentences = pad_input(test_sentences, seq_len)
# Converting our labels into numpy arrays
train_labels = np.array(train_labels)
test_labels = np.array(test_labels)
A padded sentence will look something like this, where 0 represents the padding:
array([ 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 44, 125, 13, 28, 1701, 5144, 60,
31, 10, 3, 44, 2052, 10, 84, 2131, 2,
5, 27, 1336, 8, 11, 125, 17, 153, 6,
5, 146, 103, 9, 2, 64, 5, 117, 14,
7, 42, 1680, 9, 194, 56, 230, 107, 2,
7, 128, 1680, 52, 31073, 41, 3243, 14, 3,
3674, 2, 11, 125, 52, 10669, 156, 2, 1103,
29, 0, 0, 6, 917, 52, 1366, 2, 31,
10, 156, 23, 2071, 3574, 2, 11, 12, 7,
2954, 9926, 125, 14, 28, 21, 2, 180, 95,
132, 147, 9, 220, 12, 52, 718, 56, 2,
2339, 5, 272, 11, 4, 72, 695, 562, 4,
722, 4, 425, 4, 163, 4, 1491, 4, 1132,
1829, 520, 31, 169, 34, 77, 18, 16, 1107,
69, 33])
Our dataset is already split into training and testing data. However, we still need a set of data for validation during training. Therefore, we will split our test data by half into a validation set and a testing set. A detailed explanation of dataset splits can be found here.
split_frac = 0.5 # 50% validation, 50% test
split_id = int(split_frac * len(test_sentences))
val_sentences, test_sentences = test_sentences[:split_id], test_sentences[split_id:]
val_labels, test_labels = test_labels[:split_id], test_labels[split_id:]
Next, this is the point where we’ll start working with the PyTorch library. We’ll first define the datasets from the sentences and labels, followed by loading them into a data loader. We set the batch size to 400. This can be tweaked according to your needs.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

train_data = TensorDataset(torch.from_numpy(train_sentences), torch.from_numpy(train_labels))
val_data = TensorDataset(torch.from_numpy(val_sentences), torch.from_numpy(val_labels))
test_data = TensorDataset(torch.from_numpy(test_sentences), torch.from_numpy(test_labels))

batch_size = 400
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
val_loader = DataLoader(val_data, shuffle=True, batch_size=batch_size)
test_loader = DataLoader(test_data, shuffle=True, batch_size=batch_size)
We can also check if we have any GPUs to speed up our training time by many folds. If you’re using FloydHub with GPU to run this code, the training time will be significantly reduced.
# torch.cuda.is_available() checks and returns a Boolean True if a GPU is available, else it'll return False
is_cuda = torch.cuda.is_available()
# If we have a GPU available, we'll set our device to GPU. We'll use this device variable later in our code.
if is_cuda:
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
At this point, we will define the architecture of the model. Here, we could create neural networks with deep layers or a large number of LSTM layers stacked on top of each other. However, a simple model such as the one below, with just an LSTM and a fully connected layer, works quite well and requires much less training time. We will be training our own word embeddings in the first layer before the sentences are fed into the LSTM layer.
The final layer is a fully connected layer with a sigmoid function to classify whether the review is of positive/negative sentiment.
class SentimentNet(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
        super(SentimentNet, self).__init__()
        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim

        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=drop_prob, batch_first=True)
        self.dropout = nn.Dropout(drop_prob)
        self.fc = nn.Linear(hidden_dim, output_size)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, hidden):
        batch_size = x.size(0)
        x = x.long()
        embeds = self.embedding(x)
        lstm_out, hidden = self.lstm(embeds, hidden)
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)

        out = self.dropout(lstm_out)
        out = self.fc(out)
        out = self.sigmoid(out)

        out = out.view(batch_size, -1)
        out = out[:,-1]
        return out, hidden

    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data
        hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device),
                  weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device))
        return hidden
Take note that we can actually load pre-trained word embeddings such as GloVe or fastText which can increase the model’s accuracy and decrease training time.
With this, we can instantiate our model after defining the arguments. The output dimension will only be 1 as it only needs to output 1 or 0. The learning rate, loss function and optimizer are defined as well.
vocab_size = len(word2idx) + 1
output_size = 1
embedding_dim = 400
hidden_dim = 512
n_layers = 2

model = SentimentNet(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
model.to(device)

lr = 0.005
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
Finally, we can start training the model. For every 1000 steps, we’ll be checking the output of our model against the validation dataset and saving the model if it performed better than the previous time.
The state_dict is the model’s weights in PyTorch and can be loaded into a model with the same architecture at a separate time or script altogether.
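As a minimal illustration of this save/load round-trip (the tiny model and file path here are hypothetical, chosen just for the demo):

```python
import os
import tempfile

import torch
import torch.nn as nn

# A tiny model just to demonstrate state_dict round-tripping
net_a = nn.Linear(4, 2)
path = os.path.join(tempfile.gettempdir(), 'tiny_state_dict.pt')
torch.save(net_a.state_dict(), path)

# A fresh model with the same architecture can load those weights
# later, in a separate script or process
net_b = nn.Linear(4, 2)
net_b.load_state_dict(torch.load(path))
print(torch.equal(net_a.weight, net_b.weight))  # True
```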
epochs = 2
counter = 0
print_every = 1000
clip = 5
valid_loss_min = np.inf
model.train()
for i in range(epochs):
    h = model.init_hidden(batch_size)

    for inputs, labels in train_loader:
        counter += 1
        h = tuple([e.data for e in h])
        inputs, labels = inputs.to(device), labels.to(device)
        model.zero_grad()
        output, h = model(inputs, h)
        loss = criterion(output.squeeze(), labels.float())
        loss.backward()
        nn.utils.clip_grad_norm_(model.parameters(), clip)
        optimizer.step()

        if counter%print_every == 0:
            val_h = model.init_hidden(batch_size)
            val_losses = []
            model.eval()
            for inp, lab in val_loader:
                val_h = tuple([each.data for each in val_h])
                inp, lab = inp.to(device), lab.to(device)
                out, val_h = model(inp, val_h)
                val_loss = criterion(out.squeeze(), lab.float())
                val_losses.append(val_loss.item())

            model.train()
            print("Epoch: {}/{}...".format(i+1, epochs),
                  "Step: {}...".format(counter),
                  "Loss: {:.6f}...".format(loss.item()),
                  "Val Loss: {:.6f}".format(np.mean(val_losses)))
            if np.mean(val_losses) <= valid_loss_min:
                torch.save(model.state_dict(), './state_dict.pt')
                print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(valid_loss_min, np.mean(val_losses)))
                valid_loss_min = np.mean(val_losses)
After we’re done training, it's time to test our model on a dataset it has never seen before - our test dataset. We'll first load the model weights from the point where the validation loss is the lowest.
We can calculate the accuracy of the model to see how accurate our model’s predictions are.
# Loading the model weights with the lowest validation loss
model.load_state_dict(torch.load('./state_dict.pt'))

test_losses = []
num_correct = 0
h = model.init_hidden(batch_size)

model.eval()
for inputs, labels in test_loader:
    h = tuple([each.data for each in h])
    inputs, labels = inputs.to(device), labels.to(device)
    output, h = model(inputs, h)
    test_loss = criterion(output.squeeze(), labels.float())
    test_losses.append(test_loss.item())
    pred = torch.round(output.squeeze())  # Rounds the output to 0/1
    correct_tensor = pred.eq(labels.float().view_as(pred))
    correct = np.squeeze(correct_tensor.cpu().numpy())
    num_correct += np.sum(correct)

print("Test loss: {:.3f}".format(np.mean(test_losses)))
test_acc = num_correct/len(test_loader.dataset)
print("Test accuracy: {:.3f}%".format(test_acc*100))
[Out]: Test loss: 0.161
Test accuracy: 93.906%
We managed to achieve an accuracy of 93.9% with this simple LSTM model! This shows the effectiveness of LSTMs in handling such sequential tasks.
This result was achieved with just a few simple layers and without any hyperparameter tuning. There are so many other improvements that can be made to increase the model’s effectiveness, and you are free to attempt to beat this accuracy by implementing these improvements!
Some improvement suggestions are as follows:
• Running a hyperparameter search to optimize your configurations. A guide to the techniques can be found here
• Increasing the model complexity
• E.g. adding more layers or using bidirectional LSTMs
• Using pre-trained word embeddings such as GloVe embeddings
## Beyond LSTMs
For many years, LSTMs were the state of the art when it comes to NLP tasks. However, recent advancements in attention-based models and Transformers have produced even better results. With the release of pre-trained transformer models such as Google’s BERT and OpenAI’s GPT, the use of LSTMs has been declining. Nevertheless, understanding the concepts behind RNNs and LSTMs is definitely still useful, and who knows, maybe one day the LSTM will make its comeback?
### Moving Forward
This brings us to the end of this article on LSTMs. We covered the gating mechanisms of the LSTM and how it can retain long-term dependencies. We also implemented an LSTM model on the Amazon Reviews dataset for Sentiment Analysis.
If you are interested in understanding more advanced NLP techniques, you can refer to the following articles on How to code The Transformer or How to Build OpenAI’s GPT-2. Alternatively, this article also provides a broad view of the current state of NLP developments and what you can look forward to in terms of emerging technologies.
Happy learning!
Special thanks to Alessio for his valuable feedback and advice through this article and the rest of the FloydHub team for providing this amazing platform and allowing me to give back to the deep learning community. Stay awesome!
#### FloydHub Call for AI writers
Want to write amazing articles like Gabriel and play your role in the long road to Artificial General Intelligence? We are looking for passionate writers, to build the world's best blog for practical applications of groundbreaking A.I. techniques. FloydHub has a large reach within the AI community and with your help, we can inspire the next wave of AI. Apply now and join the crew!
https://gmatclub.com/forum/calling-all-fall-2010-dartmouth-tuck-applicants-82393-40.html

Calling All Fall 2010 Dartmouth (Tuck) Applicants : The B-School Application - Page 3
# Calling All Fall 2010 Dartmouth (Tuck) Applicants
Intern
Joined: 31 Jan 2009
Posts: 17
Schools: tuck, haas
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
29 Sep 2009, 22:03
i'm in for EA.
just got back from hanover with an interview/visit. i really enjoyed my time there. the facilities are nice. i went on a monday and was surprised by the number of visitors/interviewers. i'd guess that there were 30+. though numbers might drop off by a lot tues~thurs since they are not next to the weekend.
the student body seems a little bit older than at other schools and quite a few people are married.
Manager
Joined: 23 Sep 2009
Posts: 199
Followers: 1
Kudos [?]: 6 [0], given: 2
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
30 Sep 2009, 18:41
gboi wrote:
i'm in for EA.
just got back from hanover with an interview/visit. i really enjoyed my time there. the facilities are nice. i went on a monday and was surprised by the number of visitors/interviewers. i'd guess that there were 30+. though numbers might drop off by a lot tues~thurs since they are not next to the weekend.
the student body seems a little bit older than at other schools and quite a few people are married.
I've heard from some people who visited, that the "being in the middle of nowhere" feeling at Tuck bothered them. What's your take on it?
thanks
Manager
Joined: 16 Jan 2009
Posts: 88
Followers: 1
Kudos [?]: 1 [0], given: 1
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
05 Oct 2009, 00:47
wiseguyMBA wrote:
gboi wrote:
i'm in for EA.
just got back from hanover with an interview/visit. i really enjoyed my time there. the facilities are nice. i went on a monday and was surprised by the number of visitors/interviewers. i'd guess that there were 30+. though numbers might drop off by a lot tues~thurs since they are not next to the weekend.
the student body seems a little bit older than at other schools and quite a few people are married.
I've heard from some people who visited, that the "being in the middle of nowhere" feeling at Tuck bothered them. What's your take on it?
thanks
Yes, good question. Does being in a big city, near a lot of prospective employers really help?
VP
Joined: 28 Feb 2008
Posts: 1323
Schools: Tuck
Followers: 6
Kudos [?]: 127 [1] , given: 6
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
05 Oct 2009, 03:49
1
KUDOS
wiseguyMBA wrote:
I've heard from some people who visited, that the "being in the middle of nowhere" feeling at Tuck bothered them. What's your take on it?
thanks
It all depends on your perspective.
Tuck is not really all that far from a big city. In less than 2 h you can be in Boston. When I visited Cornell, I really felt "in the middle of nowhere". The closest "city" to Cornell is Syracuse, NY and that's almost a 2 h drive away.
There are quite a few people from NYC who are attending Tuck and from the discussions I've had with them, they seem to be adjusting just fine. At least one of them specifically chose Tuck because it was so different from NYC. I mean, once you get your MBA, you'll be living in a big city for the rest of your life. It might be the last time you get to live in a small college town in a beautiful part of New England.
As for employers, it all depends on what industry and location you are targeting. If you're looking to focus on finance, NYU will have way more options, just based on the fact you're in NYC. I'm targeting health care and Tuck has an impressive list of companies who recruit on campus. Comparable with Kellogg or Duke.
The other nice thing is because Tuck is so close to Boston, a lot of companies who visit MIT or Harvard also make the drive to Tuck since it's not really out of the way.
RF
_________________
Manager
Joined: 23 Sep 2009
Posts: 199
Followers: 1
Kudos [?]: 6 [0], given: 2
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
05 Oct 2009, 10:51
Guys
I am in sort of a dilemma; my project which was supposed to go into production last Friday has extended over by a week, cos of a major bug they found.
I can't make it to Tuck in the next one week, before the EA deadline.
Though I did take 2 days off tomorrow and day after, it would be a huge risk at my job to be out of the action for 2 days during a crucial stage.
I have 3 options:
a. suck it up and just go visit Hanover, risking some raised eyebrows at work.
b. Cancel the interview and apply anyway, without interview.
c. Apply in R1.
What do you guys think I should do? Does anyone know of people who got in without an interview? Should I call Adcom and explain this to them?
I know eventually it will be a personal decision but just trying to gather form here if the interview is a dealbreaker
WG
VP
Joined: 28 Feb 2008
Posts: 1323
Schools: Tuck
Followers: 6
Kudos [?]: 127 [0], given: 6
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
05 Oct 2009, 11:48
wiseguyMBA wrote:
Guys
I am in sort of a dilemma; my project which was supposed to go into production last Friday has extended over by a week, cos of a major bug they found.
I can't make it to Tuck in the next one week, before the EA deadline.
RF
_________________
Current Student
Joined: 04 Jul 2007
Posts: 101
Location: American working in Europe
Schools: HBS R2, Stanford R2, Wharton R2, Columbia RD, Tuck R2, Darden R2, Texas R2
Followers: 6
Kudos [?]: 24 [0], given: 25
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
05 Oct 2009, 15:46
goose wrote:
Where are all the Tuck applicants??? Has anyone visited or interviewed yet?
I'm in for Round 2 - applying via Consortium.
I visited the campus this past July - the support and help from the staff was nothing short of stellar. The facilities are great (if a bit hidden from the eye) and even in the summer, you get a real feeling that it is a tight crew. Nothing but good things to say about Tuck.
I'm a military applicant, and was interviewed by the enrollment and recruiting manager back in July.
Intern
Joined: 07 Oct 2009
Posts: 2
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
07 Oct 2009, 17:22
I'm wondering how the interview process changes, if at all, for reapplicants?
Are there any reapplicants here who have already completed their interview?
Intern
Joined: 23 Sep 2009
Posts: 13
Location: India
Schools: Tuck, Emory, Kelly, TAMU, Fisher, Simon, Tippie, NUS, Carey.
Followers: 0
Kudos [?]: 2 [0], given: 9
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
07 Oct 2009, 20:13
refurb wrote:
wiseguyMBA wrote:
Guys
I am in sort of a dilemma; my project which was supposed to go into production last Friday has extended over by a week, cos of a major bug they found.
I can't make it to Tuck in the next one week, before the EA deadline.
RF
Hi I am an international applicant. So the best I can do is to research about Tuck through the website and such forums. Also, I plan to attend the Tuck reception at Mumbai on 29th Oct. I plan to apply in the Nov deadline. I have been hearing about a lot of applicants visiting Tuck and experiencing the culture. I am sure this would have a great effect on their essays but does an absence of such a visit to campus lower any chances of proving that one is as interested in Tuck as anyone else?
As far as the interviews are concerned, does every applicant have to interview (before final acceptance) at the campus? Is the decision for interview initiated by the adcom or the applicant and do international applicants have an option of a telephonic interview or an interview at their country of residence?
simplygood.
_________________
You don't get what you deserve. You deserve what you get.
Intern
Joined: 16 Aug 2009
Posts: 27
Schools: Kellogg (R2 Admit), Tuck (R1 Ding), Booth (R2 Ding), Wharton (R2 Interviewed) , HBS (R2), Stanford (R2), Yale SOM (R3 withdrew), Tepper (R3 withdrew)
WE 1: 5.5 yrs engineering
Followers: 0
Kudos [?]: 3 [0], given: 4
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
07 Oct 2009, 22:07
Hi, I have a question regarding the 4 admission rounds at Tuck. Most top MBA programs have 3 rounds, with the first 2 being equally competitive and the third being virtually impossible for most applicants. How does the difficulty change from round to round at Tuck? Is it roughly the same for the first 3 rounds, or does it get REALLY tough after the November round? I even heard a rumor (not on this forum) that EA is actually tougher than the November round....Any truth to that? I suppose it is possible (for any top program, not just Tuck) that an early round could have a lower admission rate due to a greater percentage of the applicants rushing their essays just to meet the earlier deadline
VP
Joined: 28 Feb 2008
Posts: 1323
Schools: Tuck
Followers: 6
Kudos [?]: 127 [0], given: 6
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
08 Oct 2009, 01:55
simplygood wrote:
Hi I am an international applicant. So the best I can do is to research about Tuck through the website and such forums. Also, I plan to attend the Tuck reception at Mumbai on 29th Oct. I plan to apply in the Nov deadline.
International applicants are not expected to visit campus.
It's good that you are going to the Tuck Reception in Mumbai. My suggestion is to make sure you talk with a couple of the Tuck people when you are there.
RF
_________________
VP
Joined: 28 Feb 2008
Posts: 1323
Schools: Tuck
Followers: 6
Kudos [?]: 127 [0], given: 6
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
08 Oct 2009, 01:58
machg400 wrote:
Hi, I have a question regarding the 4 admission rounds at Tuck.
The first three rounds are pretty much equal in terms of how hard it is to get accepted, the final round is very difficult.
As for how hard it is to get accepted in the EA round, from last year's thread, it appears that not many people apply EA (I think it was under 200 last year), so your chances might be slightly better than rounds 1 and 2, but I don't think it makes much difference either way.
RF
_________________
Manager
Joined: 23 Sep 2009
Posts: 199
Followers: 1
Kudos [?]: 6 [0], given: 2
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
08 Oct 2009, 07:43
refurb wrote:
machg400 wrote:
Hi, I have a question regarding the 4 admission rounds at Tuck.
The first three rounds are pretty much equal in terms of how hard it is to get accepted, the final round is very difficult.
As for how hard it is to get accepted in the EA round, from last year's thread, it appears that not many people apply EA (I think it was under 200 last year), so your chances might be slightly better than rounds 1 and 2, but I don't think it makes much difference either way.
RF
Just visited Tuck yesterday, and brought this up in the Q&A. the EA is better than R1, and that is because people who are only ABSOLUTELY SURE about Tuck apply in EA, whereas people that apply in R1 are not fully sure about Tuck.
That said, don't apply to tuck EA just because chances are better..if you're accepted, you will have to pay $3500 by Jan 13, 2010, by which date you probably wont even hear from your R2 schools. Current Student Joined: 16 Aug 2009 Posts: 344 Schools: Columbia Business School - Class of 2012 Followers: 3 Kudos [?]: 37 [0], given: 22 Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink] ### Show Tags 08 Oct 2009, 08:03 wiseguyMBA wrote: refurb wrote: machg400 wrote: Hi, I have a question regarding the 4 admission rounds at Tuck. The first three rounds are pretty much equal in terms of how hard it is to get accepted, the final round is very difficult. As for how hard it is to get accepted in the EA round, from last year's thread, it appears that not many people apply EA (I think it was under 200 last year), so your chances might be slightly better than rounds 1 and 2, but I don't think it makes much difference either way. RF Just visited Tuck yesterday, and brought this up in the Q&A. the EA is better than R1, and that is because people who are only ABSOLUTELY SURE about Tuck apply in EA, whereas people that apply in R1 are not fully sure about Tuck. That said, don't apply to tuck EA just because chances are better..if you're accepted, you will have to pay$3500 by Jan 13, 2010, by which date you probably wont even hear from your R2 schools.
Well, in all of the past years, Harvard and Stanford would release their decisions after the Tuck EA deposit deadline. This is not going to be the case this year. I'm really wondering if that will change the dynamics of how this works.
Manager
Joined: 23 Sep 2009
Posts: 199
Followers: 1
Kudos [?]: 6 [0], given: 2
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
10 Oct 2009, 09:18
Has anyone else's recommenders mentioned the ridiculous reco form from Tuck, which is taking them 10 hours to complete? This is kinda lame...I mean the only time my recommenders get to do this is on the weekend and it's taking up too much of their time.
Manager
Joined: 23 Sep 2009
Posts: 199
Followers: 1
Kudos [?]: 6 [1] , given: 2
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
12 Oct 2009, 16:03
1
KUDOS
Interview report from last week:
First visit to Tuck, very impressed by the scenic location.
Interviewer was 2nd year, was into consulting. Questions:
1. Walk me through resume.
2. I mentioned an interesting fact in my resume...so she dwelled on that asked me if I considered that the most significant accomplishment in my life. I said yes.
3. What inspired me to achieve #2?
4. Experience working in a team of diverse individuals.
5. Why MBA
6. Why Tuck?
7. what I'll contribute to Tuck? I didnt really mention a list of clubs..I said I'll draw from my past experiences and contribute in areas I'm strong at; community impact, study groups, also want to be a member of Adcom.
8. I mentioned being able to deal with conflict so she asked me an instance where I did that.
9. Any questions for her?
The interview lasted 35 minutes, pretty short. I felt good about it, was very conversational in nature. I saw a lot of people that day though...about 25 people interviewed while I was there, and I left at 1-30pm so maybe 35-40 by end of day?
I'll press the submit button tomorrow and then it's left to the Adcom Gods.
SVP
Status: Burning mid-night oil....daily
Joined: 07 Nov 2008
Posts: 2400
Schools: Yale SOM 2011 Alum, Kellogg, Booth, Tuck
WE 1: IB - Restructuring & Distressed M&A
Followers: 78
Kudos [?]: 737 [0], given: 548
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
12 Oct 2009, 16:37
ROLL CALL PG UPDATED
Anyone wants to take over for updating Tuck Roll Call PG?
_________________
Manager
Joined: 23 Sep 2009
Posts: 199
Followers: 1
Kudos [?]: 6 [0], given: 2
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
12 Oct 2009, 20:37
nink wrote:
ROLL CALL PG UPDATED
Anyone wants to take over for updating Tuck Roll Call PG?
I can do that.
regards
WG
Manager
Joined: 23 Sep 2009
Posts: 199
Followers: 1
Kudos [?]: 6 [0], given: 2
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
13 Oct 2009, 13:47
hi people.
I have the transcript of the Admissions chat from this morning. Too long to post here.
Please pm me if you need it.
regards
WG
Intern
Joined: 31 Jan 2009
Posts: 17
Schools: tuck, haas
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: Calling All Fall 2010 Dartmouth (Tuck) Applicants [#permalink]
13 Oct 2009, 20:45
anyone not able to get into today's chat? i wasn't able to get into the first chat ~2 weeks ago because it filled up too quickly (70 participants max?)
so i was the first to log in this time! the chat was wayyy too crazy. people asking questions multiple times to get answers. it was very difficult to follow. i stuck to easy questions for the current students and those were answered.
as for hanover, i think i'd like it there. that's just me though. i've lived in the suburbs as well as a couple of big name cities, but it's all personal choice.
Display posts from previous: Sort by | 2017-02-23 17:09:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27553504705429077, "perplexity": 5634.368068370837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00324-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://proxieslive.com/finding-most-efficient-sorting-algorithm/ | # Finding most efficient sorting algorithm
Arr is an array that contains $$n$$ numbers.
Suggest the most efficient algorithm for each case and analyze the runtime.
Explain why the algorithm you chose is the best one.
1. Arr contains exactly $$\frac{n}{5}$$ distinct values.
2. Arr contains integers in the range $$[0, … , 𝑛^7 − 1]$$.
3. There are exactly (𝑛 − √𝑛) integers in Arr, which are between 1 and 100. The remaining √𝑛 elements are not integers.
I was trying to look at some of the sorting algorithms and try to figure one by one which one is the best, but I believe there’s a better way to do it. What would be the right approach? | 2020-11-26 18:36:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6564417481422424, "perplexity": 571.5086206517229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188899.42/warc/CC-MAIN-20201126171830-20201126201830-00503.warc.gz"} |
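Not an answer key, but the cases above all point toward non-comparison sorts. As one hedged illustration (Python is my choice here, not the poster's): case 2, with keys in [0, n^7 − 1], admits an LSD radix sort in base n, since every key then has at most 7 base-n digits, giving at most 7 stable O(n) passes:

```python
def radix_sort(arr):
    """LSD radix sort in base n = len(arr).

    With every key in [0, n**7 - 1], each key has at most 7 base-n
    digits, so the loop below makes at most 7 stable O(n) passes.
    """
    n = len(arr)
    if n <= 1:
        return list(arr)
    out, exp, largest = list(arr), 1, max(arr)
    while exp <= largest:
        buckets = [[] for _ in range(n)]        # one bucket per base-n digit
        for x in out:
            buckets[(x // exp) % n].append(x)   # append keeps each pass stable
        out = [x for bucket in buckets for x in bucket]
        exp *= n
    return out
```

Case 3 invites the same kind of thinking: counting-sort the n − √n small integers (range 1 to 100) in O(n), and comparison-sort the √n remaining elements in O(√n · log n), which is O(n).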
http://www.citizendia.org/Universal_instantiation | In logic, universal instantiation (UI, sometimes confused with dictum de omni) is an inference from a truth about each member of a class of individuals to the truth about a particular individual of that class. It is generally given as a quantification rule for the universal quantifier, but it can also be encoded in an axiom. It is one of the basic principles used in quantification theory.
Example: "All dogs are mammals. Fido is a dog. Therefore Fido is a mammal. "
In symbols the rule as an axiom schema is
$\forall x \, A(x) \Rightarrow A(a/x),$
for some term a, and where A(a/x) is the result of substituting a for all free occurrences of x in A.
And as a rule of inference it is
from ⊢ ∀x A infer ⊢ A(a/x),
with A(a/x) the same as above.
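As a machine-checkable aside (Lean 4 syntax, my addition rather than part of the original article): under the propositions-as-types reading, universal instantiation is nothing more than applying the proof of the universal statement to a particular term.

```lean
-- Given h : ∀ x, A x and a particular term a,
-- the instantiation A a is proved by plain application.
example (α : Type) (A : α → Prop) (h : ∀ x, A x) (a : α) : A a :=
  h a
```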
Irving Copi noted that universal instantiation ". . . follows from variants of rules for 'natural deduction', which were devised independently by Gerhard Gentzen and Stanisław Jaśkowski in 1934." -pg. 71, Symbolic Logic, 5th ed.
© 2009 citizendia.org; parts available under the terms of GNU Free Documentation License, from http://en.wikipedia.org
network: | | | 2013-05-19 07:21:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6715096831321716, "perplexity": 1203.2348755275652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696384213/warc/CC-MAIN-20130516092624-00044-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://drtamararnon.com/chemistry-density-problems/ | • Balance this chemical reaction: Hint
• 29.3 g/cm³
• 1.15 g/mL
• 1.16 g/mL
Science Trip Common Reagents & Buffers - Want to know a buffer's mass or volume? How about the volume of buffer needed to dilute a solution? Click here to see a directory of common reagents. EngineersEdge Kinetic Energy Equations - A quick and simple way to find the kinetic energy of an object. This page uses Dalton's Law to calculate those partial pressure values for you. Example 8: A block of lead has dimensions of 5.55 cm by 5.30 cm by 6.50 cm.
What is its density?
Digital Ionic Equation - Know the name of a molecule but not what it is made of? Simply enter the name and this tool gives you the molar mass and empirical formula of the molecule. Shodor Redox Reactions - For a more in-depth look at oxidation-reduction reactions, check out this wonderful plug-and-chug problem solver. TutorVista Percent Composition - Learn the percent composition by mass of each element in a chemical formula.
• 2480 g
• Below is a model of a pyramid found at an archeological dig, made of an unidentified substance. It is too large to find the volume by submerging it in water. Furthermore, the researchers will not remove a piece to test, because this pyramid is a part of history. Its height is 150.0 m. The length of its base is 75.0 m and the width is 60.0 m. The mass of the pyramid is 7.50×10^5 kg. What is its density?
• If using a calculator, you can store the overall molar mass in a variable such as "A". This will speed up calculations, reducing typographical errors.
• CCl4 is 95.2% Cl and 7.8% C by mass
• CCl4 is 92.2% Cl and 7.8% C by mass
• 1.049 g/mL
This collection of chemistry calculators, divided into distinct basic concepts, is a great study aid for introductory chemistry, but includes quite a few tools for higher-level projects in such topics as quantum numbers and advanced stoichiometry. A kilogram of feathers clearly takes up more space, but that is because it is less "dense." But what is density, and how can we determine it? People could melt copper pennies and sell the copper for more than the pennies were worth. The Daily Value is based on the mass of each type recommended per day per person for a 2000 calorie diet. With a manual balance, you find the unknown mass of an object by adjusting or comparing known masses until balance is achieved.
These steps are detailed in the figure below. It is as simple as click, plug, and check! The king's "gold" crown, therefore, was not made of pure gold. Light substances would float in water, so their density must be less than that of water.
Colorado State Equilibria - A more advanced, accurate online source for calculating equilibria. If you want to know the percent composition of the elements in a compound, follow these steps: and for 1 I know that I need to go from L to mL, so I move the decimal to the right 3 times. 2) Calculate the volume of the copper foil:
A Note About Significant Figures
Look along the same row and read the percent composition. (0.050268 cm) (10 mm / 1 cm) = 0.503 mm (to three significant figs). And, no, you shouldn't buy it! Tip: look up the density of fool's gold. It is as simple as click, plug, and check! and for 1 I know that I need to go from L to mL, so I move the decimal to the right 3 times.
For one year, 1943, the penny did not have any copper in it because of the cost of World War II. 4.924 g / 8.96 g/cm³ = 0.54955 cm³. What is the volume (in cubic cm) of the 563.35 carat Star of India sapphire? Example 1.5.1: Nutrition Labels. The components of density are mass and volume, each of which can be more confusing than it seems at first glance. How many moles of water are produced from 20.2 g of B2H6?
Calculating Density of a Substance
Calcium carbide (CaC2) reacts with water to form calcium hydroxide (Ca(OH)2) and acetylene gas (C2H2). AJ Design Ideal Gas Law Formulas and Equations - for solving density, pressure, temperature or volume with the Ideal Gas Law. What are mass and volume? We can't understand density until we know its parts: mass and volume.
• 0.274 g/mL
• Find the molar mass of all the elements in the compound in grams per mole.
• If using a calculator, you can store the overall molar mass in a variable such as "A". This will speed up calculations, reducing typographical errors.
• Below is a model of a pyramid found at an ancient dig, made of an unfamiliar substance. It is too big to find the volume by submerging it in water. Also, the researchers won't remove a piece to test, since this pyramid is a part of history. Its height is 150.0 m. The length of its base is 70.0 m and the width is 55.0 m. The mass of this pyramid is 5.50×10^5 kg. What is the density?
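The calculator-tip bullets above can be made concrete. A small sketch (Python, with standard atomic masses that I supply; they are not given on the page), checked against the CCl4 percent-composition option quoted earlier:

```python
ATOMIC_MASS = {"C": 12.011, "Cl": 35.453}  # g/mol, standard values

def percent_composition(counts):
    """Percent by mass of each element, given {element: atom count}."""
    total = sum(ATOMIC_MASS[el] * k for el, k in counts.items())  # the stored "A"
    return {el: 100 * ATOMIC_MASS[el] * k / total for el, k in counts.items()}

pct = percent_composition({"C": 1, "Cl": 4})   # CCl4
# pct["Cl"] is about 92.2 and pct["C"] about 7.8, matching the option above.
```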
What is its density? What is the volume (in cubic inches) of the 563.35 carat Star of India sapphire? Simply enter your figures into the spaces provided and your formulas will always even out. He then realized that he could use the same method to determine the density of the crown! He then supposedly ran through the streets naked yelling "Eureka," which means "I found it!" in Latin. There are physical properties of a substance which help identify the material. Example 6: Calculate the density of sulfuric acid if 35.4 milliliters of the acid weighs 65.14 g.
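Example 6 is a direct density = mass / volume substitution. As a quick check (a Python sketch of mine, not part of the page):

```python
def density(mass_g, volume_ml):
    """Density in g/mL from a mass in grams and a volume in milliliters."""
    return mass_g / volume_ml

d = density(65.14, 35.4)   # Example 6's numbers
print(round(d, 2))         # 1.84 g/mL, the familiar density of concentrated sulfuric acid
```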
Electrolysis
(789 kg/m³) (1000 g / 1 kg) (1 m³ / 100³ cm³) = 0.789 g/cm³. Since one milliliter equals 1 cm³, there is no numerical difference between g/cm³ and g/mL. The density of a gas may be handled in a different unit, because its density is quite sensitive to temperature and pressure. Look along the same row and read the percent composition. 2) Convert kg/m³ to g/cm³
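The kg/m³ to g/cm³ conversion worked above always reduces to one fixed factor (1000 g per kg, 100³ cm³ per m³). A one-liner to the same effect (Python, my sketch):

```python
def kg_per_m3_to_g_per_cm3(rho_kg_per_m3):
    # multiply by 1000 g/kg, divide by 100**3 cm^3/m^3: net factor of 1/1000
    return rho_kg_per_m3 * 1000 / 100**3

print(kg_per_m3_to_g_per_cm3(789))   # 0.789, matching the worked example
```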
Extra Tools
Significant Data — Never decrease on account of prolonged hand-written equations. Which 1 weighs a lot more, a kilo with feathers or simply a kilogram with brick? Even though a lot of people is going to say that your kg regarding bricks is definitely more substantial, they actually weigh up the identical! Nevertheless, everybody is caught up by way of the thought of occurrence $$\rho$$, that can cause these phones respond to the issue improperly. The Supposrr que unit intended for body is kg/m Three or more . 26561 grms or perhaps Twenty-six.56 kilograms
6. While meditating on this puzzle in a bath, Archimedes recognized that when he entered the tub, the water rose. Example 4: A rectangular block of copper metal weighs 1896 g.
Calculating the Density of a Substance
What mass of O2 is needed to burn 35.1 g of B2H6? After 1857, the government began adding other, cheaper metals to the mix. The unit for cubic centimeters is cm³ and for milliliters is mL.
where
ρ = density
m = mass
V Equals volume | 2020-07-08 02:05:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32367339730262756, "perplexity": 3142.699732459068}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896169.35/warc/CC-MAIN-20200708000016-20200708030016-00499.warc.gz"} |
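The ρ = m/V relationship and the unit conversion above can be checked with a few lines of code. The sketch below is an illustration added here (plain Python), using the numbers from Example 6 and the kg/m³ conversion in the text:

```python
def density(mass_g, volume_ml):
    """Density = mass / volume, in g/mL (numerically equal to g/cm^3)."""
    return mass_g / volume_ml

# Example 6 from the text: 35.4 mL of sulfuric acid weighing 65.14 g
rho = density(65.14, 35.4)
print(round(rho, 2))  # → 1.84 (g/mL)

# Unit conversion from the text: 789 kg/m^3 to g/cm^3
# (789 kg/m^3) * (1000 g / 1 kg) * (1 m^3 / 100^3 cm^3)
rho_gcm3 = 789 * 1000 / 100**3
print(rho_gcm3)  # → 0.789
```

The conversion factor works out to dividing by 1000, which is why 789 kg/m³ and 0.789 g/cm³ describe the same density.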
https://edoc.unibas.ch/85917/ | # Supernova neutrino burst detection with the Deep Underground Neutrino Experiment
PDF - Published Version | 2022-06-28 05:21:21 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8851513266563416, "perplexity": 6418.500411825753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00280.warc.gz"} |
https://en.wikipedia.org/wiki/Comb_filter | # Comb filter
Feedforward comb filter structure
In signal processing, a comb filter is a filter implemented by adding a delayed version of a signal to itself, causing constructive and destructive interference. The frequency response of a comb filter consists of a series of regularly spaced notches, giving the appearance of a comb.
## Applications
Advanced PAL Comb Filter-II (APCF-II, Motorola MC141627FT)
Comb filters are employed in a variety of signal processing applications, including:
In acoustics, comb filtering can arise as an unwanted artifact. For instance, two loudspeakers playing the same signal at different distances from the listener create a comb filtering effect on the audio.[1] In any enclosed space, listeners hear a mixture of direct sound and reflected sound. The reflected sound takes a longer, delayed path compared to the direct sound, and a comb filter is created where the two mix at the listener.[2]
## Implementation
Comb filters exist in two forms, feedforward and feedback; the names refer to the direction in which signals are delayed before they are added to the input.
Comb filters may be implemented in discrete-time or continuous-time forms, which are very similar.
### Feedforward form
Feedforward comb filter structure
The general structure of a feedforward comb filter is described by the difference equation:
$$y[n] = x[n] + \alpha x[n-K]$$
where $K$ is the delay length (measured in samples), and α is a scaling factor applied to the delayed signal. Taking the z-transform of both sides of the equation yields:
$$Y(z) = \left(1 + \alpha z^{-K}\right) X(z)$$
The transfer function is defined as:
$$H(z) = \frac{Y(z)}{X(z)} = 1 + \alpha z^{-K} = \frac{z^{K} + \alpha}{z^{K}}$$
#### Frequency response
Feedforward magnitude response for various positive values of α and K = 1
Feedforward magnitude response for various negative values of α and K = 1
The frequency response of a discrete-time system expressed in the z-domain is obtained by the substitution $z = e^{j\Omega}$. Therefore, for the feedforward comb filter:
$$H\left(e^{j\Omega}\right) = 1 + \alpha e^{-j\Omega K}$$
Using Euler's formula, the frequency response is also given by
$$H\left(e^{j\Omega}\right) = \bigl[1 + \alpha \cos(\Omega K)\bigr] - j\alpha \sin(\Omega K)$$
Often of interest is the magnitude response, which ignores phase. This is defined as:
$$\left|H\left(e^{j\Omega}\right)\right| = \sqrt{\Re\left\{H\left(e^{j\Omega}\right)\right\}^{2} + \Im\left\{H\left(e^{j\Omega}\right)\right\}^{2}}$$
In the case of the feedforward comb filter, this is:
$$\left|H\left(e^{j\Omega}\right)\right| = \sqrt{\left(1 + \alpha^{2}\right) + 2\alpha \cos(\Omega K)}$$
The $(1 + \alpha^{2})$ term is constant, whereas the $2\alpha \cos(\Omega K)$ term varies periodically. Hence the magnitude response of the comb filter is periodic.
The graphs show the magnitude response for various values of α, demonstrating this periodicity. Some important properties:
• The response periodically drops to a local minimum (sometimes known as a notch), and periodically rises to a local maximum (sometimes known as a peak).
• For positive values of α, the first minimum occurs at half the delay frequency, and further minima repeat at intervals of the delay frequency $1/K$ thereafter:
$$f = \frac{1}{2K}, \frac{3}{2K}, \frac{5}{2K}, \cdots$$
• The levels of the maxima and minima are always equidistant from 1.
• When α = ±1, the minima have zero amplitude. In this case, the minima are sometimes known as nulls.
• The maxima for positive values of α coincide with the minima for negative values of α, and vice versa.
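These properties are easy to verify numerically. The sketch below (Python with NumPy; an illustration added here, not part of the article) evaluates the magnitude response formula above and checks the null locations for α = 1 and the peak/trough levels for α = 0.5:

```python
import numpy as np

def feedforward_magnitude(omega, alpha, K):
    """|H(e^{jΩ})| = sqrt((1 + α²) + 2α·cos(ΩK)) for the feedforward comb filter."""
    return np.sqrt((1 + alpha**2) + 2 * alpha * np.cos(omega * K))

K = 4
# For α = 1 the minima are true nulls, at Ω = π/K, 3π/K, ... (f = 1/2K, 3/2K, ...)
nulls = np.array([np.pi / K, 3 * np.pi / K])
print(feedforward_magnitude(nulls, 1.0, K))  # ≈ [0, 0] up to rounding

# For α = 0.5 the response ranges over [1 - α, 1 + α], equidistant from 1
mags = feedforward_magnitude(np.linspace(0, 2 * np.pi, 1001), 0.5, K)
print(round(mags.min(), 3), round(mags.max(), 3))  # ≈ 0.5 and 1.5
```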
#### Impulse response
The feedforward comb filter is one of the simplest finite impulse response filters.[3] Its response is simply the initial impulse with a second impulse after the delay.
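This impulse response can be seen directly by implementing the difference equation in a few lines (an illustrative Python sketch, not part of the article):

```python
def feedforward_comb(x, alpha, K):
    """y[n] = x[n] + α·x[n-K]; input samples before n = 0 are taken as zero."""
    return [x[n] + alpha * (x[n - K] if n >= K else 0.0) for n in range(len(x))]

impulse = [1.0] + [0.0] * 9
print(feedforward_comb(impulse, 0.5, 4))
# → [1.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]
```

The output is exactly the initial impulse followed by a single scaled copy K samples later, as described above.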
#### Pole–zero interpretation
Looking again at the z-domain transfer function of the feedforward comb filter:
$$H(z) = \frac{z^{K} + \alpha}{z^{K}}$$
the numerator is equal to zero whenever zK = −α. This has K solutions, equally spaced around a circle in the complex plane; these are the zeros of the transfer function. The denominator is zero at zK = 0, giving K poles at z = 0. This leads to a pole–zero plot like the ones shown.
Pole–zero plot of feedforward comb filter with K = 8 and α = 0.5
Pole–zero plot of feedforward comb filter with K = 8 and α = −0.5
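The zero locations can be confirmed numerically. The sketch below (NumPy; an illustration added here) finds the roots of z^K + α for K = 8, α = 0.5, and checks that they are equally spaced on a circle of radius α^(1/K):

```python
import numpy as np

K, alpha = 8, 0.5
# z^K + α has coefficients [1, 0, ..., 0, α], highest degree first
zeros = np.roots([1.0] + [0.0] * (K - 1) + [alpha])

# All K zeros lie on a circle of radius α^(1/K) ...
print(np.allclose(np.abs(zeros), alpha ** (1 / K)))  # → True

# ... and are equally spaced: consecutive sorted angles differ by 2π/K
angles = np.sort(np.angle(zeros))
print(np.allclose(np.diff(angles), 2 * np.pi / K))   # → True
```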
### Feedback form
Feedback comb filter structure
Similarly, the general structure of a feedback comb filter is described by the difference equation:
$$y[n] = x[n] + \alpha y[n-K]$$
This equation can be rearranged so that all terms in $y$ are on the left-hand side, and then taking the z-transform:
$$\left(1 - \alpha z^{-K}\right) Y(z) = X(z)$$
The transfer function is therefore:
$$H(z) = \frac{Y(z)}{X(z)} = \frac{1}{1 - \alpha z^{-K}} = \frac{z^{K}}{z^{K} - \alpha}$$
#### Frequency response
Feedback magnitude response for various positive values of α and K = 2
Feedback magnitude response for various negative values of α and K = 2
Substituting $z = e^{j\Omega}$ into the z-domain expression for the feedback comb filter:
$$H\left(e^{j\Omega}\right) = \frac{1}{1 - \alpha e^{-j\Omega K}}$$
The magnitude response is as follows:
$$\left|H\left(e^{j\Omega}\right)\right| = \frac{1}{\sqrt{\left(1 + \alpha^{2}\right) - 2\alpha \cos(\Omega K)}}$$
Again, the response is periodic, as the graphs demonstrate. The feedback comb filter has some properties in common with the feedforward form:
• The response periodically drops to a local minimum and rises to a local maximum.
• The maxima for positive values of α coincide with the minima for negative values of α, and vice versa.
• For positive values of α, the first maximum occurs at 0, and further maxima repeat at integer multiples of the delay frequency thereafter:
$$f = 0, \frac{1}{K}, \frac{2}{K}, \frac{3}{K}, \cdots$$
However, there are also some important differences because the magnitude response has a term in the denominator:
• The levels of the maxima and minima are no longer equidistant from 1. The maxima have an amplitude of $1/(1 - \alpha)$.
• The filter is only stable if |α| is strictly less than 1. As can be seen from the graphs, as |α| increases, the amplitude of the maxima rises increasingly rapidly.
#### Impulse response
The feedback comb filter is a simple type of infinite impulse response filter.[4] If stable, the response simply consists of a repeating series of impulses decreasing in amplitude over time.
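A direct implementation of the feedback difference equation makes the decaying impulse train visible (an illustrative Python sketch, not part of the article):

```python
def feedback_comb(x, alpha, K):
    """y[n] = x[n] + α·y[n-K]; output samples before n = 0 are taken as zero."""
    y = []
    for n in range(len(x)):
        y.append(x[n] + alpha * (y[n - K] if n >= K else 0.0))
    return y

impulse = [1.0] + [0.0] * 9
print(feedback_comb(impulse, 0.5, 4))
# → [1.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.25, 0.0]
```

Each recurrence of the impulse is scaled by another factor of α, so with |α| < 1 the train decays geometrically, matching the stability condition above.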
#### Pole–zero interpretation
Looking again at the z-domain transfer function of the feedback comb filter:
$$H(z) = \frac{z^{K}}{z^{K} - \alpha}$$
This time, the numerator is zero at zK = 0, giving K zeros at z = 0. The denominator is equal to zero whenever zK = α. This has K solutions, equally spaced around a circle in the complex plane; these are the poles of the transfer function. This leads to a pole–zero plot like the ones shown below.
Pole–zero plot of feedback comb filter with K = 8 and α = 0.5
Pole–zero plot of feedback comb filter with K = 8 and α = −0.5
### Continuous-time comb filters
Comb filters may also be implemented in continuous time. The feedforward form may be described by the equation:
$$y(t) = x(t) + \alpha x(t - \tau)$$
where τ is the delay (measured in seconds). This has the following transfer function:
$$H(s) = 1 + \alpha e^{-s\tau}$$
The feedforward form consists of an infinite number of zeros spaced along the jω axis.
The feedback form has the equation:
$$y(t) = x(t) + \alpha y(t - \tau)$$
and the following transfer function:
$$H(s) = \frac{1}{1 - \alpha e^{-s\tau}}$$
The feedback form consists of an infinite number of poles spaced along the jω axis.
Continuous-time implementations share all the properties of the respective discrete-time implementations. | 2021-07-29 20:07:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 24, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8219486474990845, "perplexity": 890.7623917141024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153892.74/warc/CC-MAIN-20210729172022-20210729202022-00590.warc.gz"} |
http://www.linuxine.com/story/iis-75-not-compressing-javascript-nomatchingcontenttype | # IIS 7.5 Not Compressing Javascript: NO_MATCHING_CONTENT_TYPE
http://serverfault.com – I have IIS 7.5 set to compress all static files (the default), yet it does not compress .js (javascript) files. When I turn on failed request tracing for compression, the error I get for the compression is: NO_MATCHING_CONTENT_TYPE I read about this, and the only solution I saw posted was to make sure application/x-javascript (and not just application/javascript) is specified as a mimetype seen as 'static' content. So I adjusted my applicationHost.config to have this: <httpCompression> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" dynamicCompressionLevel= (HowTos) | 2014-12-26 03:19:21 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9140996336936951, "perplexity": 8641.10697316287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548601.48/warc/CC-MAIN-20141224185908-00099-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://tex.stackexchange.com/questions/198168/alignment-inside-the-align-enviroment-horizontally-and-vertically | Alignment inside the \align environment-horizontally and vertically
Is it possible to make both expressions for the omegas aligned with one another, and do the same for the coefficients a, b, r and d, c, t? I tried to align them manually by using center, but it does not work.
\documentclass{article}
\usepackage[pdftex]{graphicx}
\usepackage[T1]{fontenc}
\usepackage{fouriernc}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{amssymb}
\begin{document}
\begin{align*}
\omega_{1}^2&=1-2\sqrt[3]{19}+\sqrt[3]{19^2}\\
\omega_{1}^{2}&=b\omega_{1}+a\omega_{2}+r
\end{align*}
\begin{center}
$a=3,\hspace{1cm}b=-3,\hspace{1cm}r=-3$
\end{center}
similarly
\begin{align*}
\omega_{2}^2&=\frac{1}{3}(13+7\sqrt[3]{19}+\sqrt[3]{19})\\
\omega_{2}^{2}&=d\omega_{1}+c\omega_{2}+t
\end{align*}
\begin{center}
$d=2,\hspace{1cm}c=1,\hspace{1cm}t=6$
\end{center}
\end{document}
I get something like this
Is it possible to use align within align environment? Thank you
-
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{fouriernc}
\usepackage{mathtools}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{amssymb}
\begin{document}
\begin{align*}
\omega_{1}^2&=1-2\sqrt[3]{19}+\sqrt[3]{19^2}\\
\omega_{1}^{2}&=b\omega_{1}+a\omega_{2}+r\\
\shortintertext{similarly}
\omega_{2}^2&=\frac{1}{3}(13+7\sqrt[3]{19}+\sqrt[3]{19})\\
\omega_{2}^{2}&=d\omega_{1}+c\omega_{2}+t\\
d=2, & \qquad c=1, \text{\phantom{$-$}}\qquad t=6
\end{align*}
\end{document}
-
Thank you. Your solution looks better. Is it possible to push the third line in each system a little bit to the right? – user124471 Aug 27 '14 at 15:20
yes. as you please you can shift the line either center or right or keep it left adjusted. It will not look nice if it is centered. – murugan Aug 27 '14 at 16:36
@user124471 Pushed now. Please see the updated answer. – Harish Kumar Aug 28 '14 at 0:00
Great. Thank you. – user124471 Aug 28 '14 at 9:39
The code could be written as shown below
\documentclass{article}
\usepackage[pdftex]{graphicx}
\usepackage[T1]{fontenc}
\usepackage{fouriernc}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{amssymb}
\begin{document}
\begin{align*}
\omega_{1}^2&=1-2\sqrt[3]{19}+\sqrt[3]{19^2}\\
\omega_{1}^{2}&=b\omega_{1}+a\omega_{2}+r
\end{align*}
\flushright
Where
$a=3,\hspace{0.8cm}b=-3,\hspace{0.6cm}r=-3$\\
\vspace{0.3cm}
\flushleft
Similarly
\begin{align*}
\omega_{2}^2&=\frac{1}{3}(13+7\sqrt[3]{19}+\sqrt[3]{19})\\
\omega_{2}^{2}&=d\omega_{1}+c\omega_{2}+t
\end{align*}
\flushright
Where
$d=2,\hspace{1cm}c=1,\hspace{0.9cm}t=6$
\end{document}
OUTPUT:
-
Thank you, also a nice solution. – user124471 Aug 28 '14 at 9:36 | 2015-09-04 01:35:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872786283493042, "perplexity": 2771.9322470751017}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00155-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/2532629/eigenvectors-of-a-rotational-matrix | # Eigenvectors of a “rotational” matrix [closed]
Let's say a matrix is rotational if it does not change under rotation by 90 degrees. For example, $\begin{pmatrix} 1&0&1\\ 0&2&0\\ 1&0&1 \end{pmatrix}$ is rotational. Show that any eigenvector $v=(v_1,\dots,v_n)$ corresponding to a real eigenvalue of a real rotational matrix has the following property: $v_i=v_{n-i+1}$.
## closed as off-topic by Travis, kingW3, Peter Franek, Arnaud D., Maria MazurNov 23 '17 at 10:29
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Travis, kingW3, Peter Franek, Arnaud D., Maria Mazur
If this question can be reworded to fit the rules in the help center, please edit the question.
• You should expose what you tried. Is this an exercise from your professor/book or from your mind? – Von Neumann Nov 22 '17 at 17:08
• It is not a homework. It was from an entry exam to a grad school. Trying to prepare. – magzhan Nov 22 '17 at 17:26
• I don't understand the question. – copper.hat Nov 22 '17 at 17:43
• Your title should be "Eigenvectors...." – Jean Marie Nov 23 '17 at 7:22
• Thanks, fixed it.. – magzhan Nov 23 '17 at 7:28
Before beginning, in order to make things clearer for all those who haven't a clear idea of what means invariance by rotation for a matrix, here is the general case for a $4 \times 4$ matrix :
$$A=\begin{pmatrix} a & b & c & a \\ c & d & d & b \\ b & d & d & c \\ a & c & b & a \end{pmatrix}.$$
A first crucial remark. Property
$$\tag{*}v_i=v_{n-i+1}$$
you want to establish is only one of the two cases that can occur.
The other case is
$$\tag{**}v_i=-v_{n-i+1}$$
Let us show it on the example of
$$A=\begin{pmatrix} -1 & 1 & 1 & -1\\ \ \ 1 & 1 & 1 & \ \ 1\\ \ \ 1 & 1 & 1 & \ \ 1\\ -1 & 1 & 1 & -1 \end{pmatrix}$$
It can be diagonalized under the form: $A=P D P^{-1}$ where $D=diag(-2 \sqrt{2},0,0, 2 \sqrt{2})$ and
$$P=\begin{pmatrix} 0.6533 & 0.6344 & 0.3123 & -0.2706\\ -0.2706 & -0.3123 & 0.6344 & -0.6533\\ -0.2706 & 0.3123 & -0.6344 & -0.6533\\ 0.6533 & -0.6344 & -0.3123 & -0.2706 \end{pmatrix}$$
The columns $P_1,P_2,P_3,P_4$ of $P$ are eigenvectors associated with the given eigenvalues in the same order. We see that the structure of the last column vector $P_4$ verifies (*), but this is not the case for the columns $P_1, P_2, P_3$ that verify the other property (**).
Now for the proof. Let us establish that we are in one of the two cases (*) or (**).
Let $J$ be the $n \times n$ matrix with ones on the second diagonal and zeros elsewhere.
The rotational matrices you consider are a subset (in fact a vector subspace) of the so-called centrosymmetric matrices (see Wikipedia article) which have in particular the following property:
$$\tag{0}AJ=JA$$
Let us assume that $V$ is an eigenvector of $A$ associated with eigenvalue $\lambda$, i.e.
$$\tag{1}AV=\lambda V.$$
Besides, the property we want to establish, grouping (*) and (**) can be written under the form:
$$\tag{2}JV = V \ \ \text{or} \ \ JV=-V$$
In other words, we want to prove that (1) $\implies$ (2).
Let us left-multiply (1) by $J$, giving $(JA)V=\lambda JV$.
Using relationship (0), this can be written under the form:
$$\tag{3}(AJ)V=\lambda JV$$
which can be read in the following way: $A(JV)=\lambda (JV)$ meaning that $JV$ is an eigenvector associated with the same eigenvalue $\lambda$ as $V$. If the eigenspace associated with $\lambda$ is one-dimensional, then $JV=kV$ for a certain $k$. But, $J$ being an isometry, we have $\|JV\|=\|V\|$. An immediate consequence is that $k=\pm 1$, which is nothing else than property (2) we wanted to establish.
Remark: note the restriction; we have established the property conditionally on the fact that the eigenspace associated with $\lambda$ is 1D.
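As a numerical check of this argument (an illustration added here, not part of the original answer), one can verify on the 4×4 example matrix above that AJ = JA and that the eigenvectors for the simple eigenvalues ±2√2 satisfy Jv = ±v:

```python
import numpy as np

A = np.array([[-1, 1, 1, -1],
              [ 1, 1, 1,  1],
              [ 1, 1, 1,  1],
              [-1, 1, 1, -1]], dtype=float)
J = np.fliplr(np.eye(4))          # ones on the second (anti-)diagonal

print(np.allclose(A @ J, J @ A))  # → True: A commutes with J

w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
print(np.round(w, 4))             # eigenvalues ≈ -2.8284, 0, 0, 2.8284

v = V[:, -1]                      # eigenvector for λ = 2√2, a simple eigenvalue
print(np.allclose(J @ v, v) or np.allclose(J @ v, -v))  # → True
```

For the two-dimensional eigenspace of λ = 0, by contrast, an arbitrary eigenvector need not satisfy either relation, which is exactly the restriction noted in the remark.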
Some references on centrosymmetric matrices:
An interesting question and answer in (Eigenvalues of centrosymmetric matrix).
An interesting answer in (Eigenvalues of a rotationally symmetric matrix).
A wide view in (http://www.math.ualberta.ca/ami/CAMQ/pdf_files/vol_10/10_4/10_4a.pdf)
• A courageous anonymous downvoting ... surely not on scientific grounds because my answer 1) fixes a bug in the question 2) proves the question 3) makes the connection with centrosymmetric matrices. Personal hatred ? – Jean Marie Nov 22 '17 at 22:49
• Thank you very much for the answer and the links. Don't worry on such things :) – magzhan Nov 23 '17 at 7:26
• How do we establish the fact that the eigenspace is 1D? – magzhan Nov 23 '17 at 8:14
• One must look at the properties of the characteristic polynomial of these matrices. The case of double roots looks very unlikely unless we are in very particular cases (where the double root is in fact $0$), but I have no proof at the moment. – Jean Marie Nov 23 '17 at 10:07 | 2019-05-26 02:52:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8086017966270447, "perplexity": 451.3699211209985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258621.77/warc/CC-MAIN-20190526025014-20190526051014-00431.warc.gz"} |
https://www.yourdictionary.com/pdt | #### Sentence Examples
• Buti o Pdt = P is the normal pressure, and as we only wish to find the excess we may leave this out of account.
• DE, namely, dE =dP+(s' - s")dT = (P/T)dT = pdT = (p" - p')dT, in which the coefficient, P, of the Peltier effect, and the thermoelectric power, p, of the couple, may be expressed in terms of the difference of the thermoelectric powers, p and p", of the separate metals with respect to a neutral standard.
• 84.177.144.104 11:40, 24 Oct 2006 (PDT)
• John Kenney 10:33, 6 Aug 2006 (PDT)
• John Kenney 08:54, 6 Aug 2006 (PDT) | 2018-12-15 06:27:10 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8876816630363464, "perplexity": 6328.113008495431}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826800.31/warc/CC-MAIN-20181215061532-20181215083532-00148.warc.gz"} |
https://shelah.logic.at/papers/1092/ | # Sh:1092
• Baldwin, J. T., & Shelah, S. Hanf numbers for extendibility and related phenomena. Preprint.
• Abstract:
In this paper we discuss two theorems whose proofs depend on extensions of the Fraissé method. We prove the Hanf number for the existence of an extendible model (has a proper extension in the class. Here, this means an \infty,\omega-elementary extension) of a (complete) sentence of L_{\omega_1, \omega} is (modulo some mild set theoretic hypotheses that we expect to remove in a later paper) the first measurable cardinal. And we outline the description on an explicit L_{\omega_1, \omega}-sentence \phi_n characterizing \aleph_n for each n. We provide some context for these developments as outlined in the lectures at IPM. | 2020-09-30 08:00:30 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8159504532814026, "perplexity": 1565.9780495826515}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402123173.74/warc/CC-MAIN-20200930075754-20200930105754-00352.warc.gz"} |
https://mathoverflow.net/questions/269667/enquiry-on-an-inequality-involving-the-sum-of-the-reciprocals-of-primes | # Enquiry on an inequality involving the sum of the reciprocals of primes
Let $p_k$ denote the $k$-th prime. Does there exist a constant $A>0$ such that for every sufficiently large $k$,
$$\sum_{p\leq p_k} \frac{1}{p} > B + \log\log p_k + \frac{A}{\log p_k}$$
where $B$ is the Mertens constant?
It is known that
$$\sum_{p\leq p_k} \frac{1}{p} = B + \log\log p_k + O\left(\frac{1}{\log p_k} \right)$$
but nothing seems to be known about the sign of the implicit constant in the Landau $O$-symbol.
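For a rough numerical illustration of how small the error term is in practice, one can compare the prime-reciprocal sum against $\log\log p_k$ directly (a Python sketch added here, not part of the original question; the value of B below is the Mertens constant truncated to ten digits):

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

B = 0.2614972128  # Mertens constant, truncated
ps = primes_up_to(100_000)
remainder = sum(1 / p for p in ps) - log(log(ps[-1])) - B
print(abs(remainder) < 0.01)  # → True: the O(1/log p_k) term is already tiny here
```

This says nothing about the sign question itself, only about the magnitude of the remainder at this modest range.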
• No. Mertens original paper has the opposite inequality with some terms plus 4/log(n+1), which means you can compute n_0 for which A=5 works for n bigger than n_0 for the opposite inequality. Also, there is oscillation (see Diamond and Pintz) which means your inequality with any positive A fails infinitely often. Gerhard "Goes This Way And That" Paseman, 2017.05.13. – Gerhard Paseman May 13 '17 at 14:04
• @GerhardPaseman: why not posting this as an answer? – Seva May 13 '17 at 16:11
• Because I don't know if the question has a typo. If the question is meant as asked, I may post an answer. Gerhard "Isn't Sure Of The Question" Paseman, 2017.05.13. – Gerhard Paseman May 13 '17 at 23:08 | 2019-10-22 17:46:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9116339683532715, "perplexity": 620.0473269325502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822458.91/warc/CC-MAIN-20191022155241-20191022182741-00242.warc.gz"} |
https://www.fernuni-hagen.de/ti/forschung/publikationen/KSS17.shtml | # Publication
Title:
On Gallai's conjecture for series-parallel graphs and planar 3-trees
Authors:
Philipp Kindermann
Lena Schlipf
André Schulz
Category:
Other
Published in:
CoRR, Vol. abs/1706.04130, 2017
Abstract:
A path cover is a decomposition of the edges of a graph into edge-disjoint simple paths. Gallai conjectured that every connected n-vertex graph has a path cover with at most n/2 paths. We prove Gallai's conjecture for series-parallel graphs. For the class of planar 3-trees we show how to construct a path cover with at most 5n/8 paths, which is an improvement over the best previously known bound of 2n/3. | 2021-10-19 01:56:01 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8441120982170105, "perplexity": 1058.6937213632248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585231.62/warc/CC-MAIN-20211019012407-20211019042407-00125.warc.gz"} |
https://stats.stackexchange.com/questions/383546/using-normal-generalized-additive-model-rather-than-zero-inflated-regression | # using normal Generalized additive model rather than zero inflated regression
I am doing regression analysis on my data; nearly half of the values are zero.
I have conducted Generalized additive model for my data ; but I was wondering if it is enough to do only generalized additive regression analysis for my data ??? | 2019-04-22 00:15:37 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8701769113540649, "perplexity": 627.1414426722098}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532948.2/warc/CC-MAIN-20190421235818-20190422021818-00514.warc.gz"} |
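The usual first diagnostic for this situation (a sketch of my own, not from the thread) is to compare the observed share of zeros with what a count model without zero inflation would predict. For a Poisson fit, P(X = 0) = exp(-mean); a large excess of observed zeros over that suggests adding a zero-inflated or hurdle component rather than relying on the GAM alone.

```python
import math

def excess_zeros(counts):
    """Observed fraction of zeros vs. the fraction a Poisson
    distribution with the same mean would predict."""
    n = len(counts)
    mean = sum(counts) / n
    observed_p0 = sum(1 for c in counts if c == 0) / n
    expected_p0 = math.exp(-mean)  # P(X = 0) under Poisson(mean)
    return observed_p0, expected_p0

# Toy data: half the observations are zero, the rest are moderate counts.
data = [0] * 50 + [3, 4, 5, 6, 7] * 10
obs, exp = excess_zeros(data)
print(f"observed P(0) = {obs:.2f}, Poisson-implied P(0) = {exp:.3f}")
```

With roughly half the observations at zero, as described above, the Poisson-implied zero probability is typically far smaller than the observed one.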
https://www.interviewcake.com/question/php/delete-node
Delete a node from a singly-linked list, given only a variable pointing to that node.
The input could, for example, be the variable $b below:

```php
class LinkedListNode {
    private $value;
    private $next = null;

    public function __construct($value) {
        $this->value = $value;
    }

    public function getNext() {
        return $this->next;
    }

    public function setNext($next) {
        $this->next = $next;
    }

    public function getValue() {
        return $this->value;
    }

    public function setValue($value) {
        $this->value = $value;
    }
}

$a = new LinkedListNode('A');
$b = new LinkedListNode('B');
$c = new LinkedListNode('C');

$a->setNext($b);
$b->setNext($c);

deleteNode($b);
```
We can do this in time and space! But our answer is tricky, and it could have some side effects... | 2019-05-20 03:35:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1886027753353119, "perplexity": 7169.539084587894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255536.6/warc/CC-MAIN-20190520021654-20190520043654-00014.warc.gz"} |
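The standard trick for this classic puzzle — sketched here in Python rather than the page's PHP, as an illustration — is to copy the next node's value into the target node and then splice the next node out. It runs in O(1) time and O(1) space, but it fails on the last node and mutates a node other code may still reference (the "side effects"):

```python
class LinkedListNode:
    def __init__(self, value):
        self.value = value
        self.next = None

def delete_node(node):
    """Delete `node` from its list without a pointer to the previous node."""
    next_node = node.next
    if next_node is None:
        # No successor to copy from, so the trick breaks down here.
        raise ValueError("can't delete the last node with this method")
    # Overwrite the target with its successor's data, then skip the successor.
    node.value = next_node.value
    node.next = next_node.next

a = LinkedListNode('A')
b = LinkedListNode('B')
c = LinkedListNode('C')
a.next = b
b.next = c

delete_node(b)  # 'B' disappears; node b itself now holds 'C'

values = []
node = a
while node is not None:
    values.append(node.value)
    node = node.next
print(values)  # ['A', 'C']
```

Note the side effect: anything still holding a reference to the original third node now points at a node the list no longer contains.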
https://www.physicsforums.com/threads/what-flows-on-a-surface-can-be-geodesic-flows.547258/ | # What flows on a surface can be geodesic flows?
1. Nov 4, 2011
### lavinia
The question is what flows on a surface can be geodesic flows. Specifically, starting with a smooth vector field on a surface - perhaps with isolated singularities - when is there a Riemannian metric so that the vector field has constant length and is tangent to geodesics on the surface?
Here is an attempt but much more needs to be done.
If I have not made a mistake in computation then here goes.
If the vector field is length 1 it may be viewed as a map into the tangent unit circle bundle. Under this map, the connection 1 form pulls back to a 1 form on the surface. If e1 is the vector field and e2 is the orthogonal vector field with e1,e2 a positively oriented basis for the tangent space at each point, then -[e1,e2], negative the Lie bracket of e1 and e2, is dual to this 1 form - I think. If e1 is tangent to geodesics, then [e1,e2] is a multiple of e2 as can be seen from direct computation - unless I have made an error.
Write [e1,e2] = se2
Let w be the pull back of the connection 1 form. Then dw = -Kvol where K is the Gauss curvature and vol is the volume element. So
dw(e1,e2) = -K = e1.w(e2) - w([e1,e2]) = e1.<e2, -[e1,e2]> - <[e1,e2], -[e1,e2]>
Note that I omitted one term on the right, e2.w(e1), because it is zero by assumption.
= -e1.s + s^2, so one has the differential equation
ds/dx = K + s^2, where x is the arc-length parameter along the geodesic.
From this one gets information right away. For instance, suppose the Gauss curvature is positive. Then s is increasing along each curve, so the geodesic cannot be closed - although it may actually close off at the singularities. But if K = 0 and s = 0, the geodesic can be closed, as illustrated by the flat cylinder.
Further, if K > 0 so that s is increasing, the geodesics cannot be dense in any region (Cauchy sequence argument, I think) and any spiral must have finite length.
Interestingly, in the case of constant positive curvature this integrates to the tangent - forgetting the constant K^{1/2} - which diverges in finite time. So one also gets a bound on the length of the geodesics. On the sphere it diverges right at the north and south poles.
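The finite-time divergence is easy to check numerically. The sketch below (my own, with forward Euler and an arbitrary blow-up threshold) integrates the thread's equation ds/dx = K + s^2 from s(0) = 0: for K = 1 the exact solution is s(x) = tan(x), which blows up at x = pi/2, while for K = 0 the solution stays at s = 0, matching the flat-cylinder remark.

```python
import math

def integrate_s(K, s0=0.0, x_max=2.0, h=1e-4, blowup=1e6):
    """Forward-Euler integration of ds/dx = K + s**2 starting at s(0) = s0.
    Returns the x at which s first exceeds `blowup`, or None if s stays
    bounded on [0, x_max]."""
    s, x = s0, 0.0
    while x < x_max:
        s += h * (K + s * s)
        x += h
        if s > blowup:
            return x
    return None

x_star = integrate_s(K=1.0)  # exact solution tan(x) blows up at pi/2
print(f"blow-up near x = {x_star:.3f}; pi/2 = {math.pi / 2:.3f}")
```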
So I am asking for ideas on how to push this further.
BTW: s can always be solved for in an open region of any point. So this problem is really asking whether it can be solved globally on the surface. For instance, suppose I have a vector field with a singularity of index -1. Can't it be globally tangent to geodesics?
If this is all wrong - please cut it to pieces.
Last edited: Nov 4, 2011 | 2019-01-17 16:49:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7685714364051819, "perplexity": 353.1778924697287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659056.44/warc/CC-MAIN-20190117163938-20190117185938-00555.warc.gz"} |
http://rwx.io/ | Welcome to my little corner of the Internet. Here you will find some notes and information from various project ideas and experiments I am working on. I am glad you stopped by, and hope you find what you are looking for.
It would be nice to simply click on a debug message and have it open Emacs to the correct location in the corresponding file. There are a variety of ways to accomplish this, I chose to use terminal hyperlinks along with adding an Emacs URI protocol to my system.
macOS URI Protocol Handler
Strangely over the past week I ran into the need for a URI protocol handler on three different occasions. Instead of looking for three separate existing handlers that might work, I decided to write a single generic handler. The solution is a simple URI protocol router that forwards requests to shell scripts that handle the protocol requests. Below I describe some of the details; you can also find the end result on github: uri-handler.
Org Export Configurations
Emacs org mode offers a variety of export options that make it easy to look at your notes in different formats, or perhaps make them available for others to view. Three I use regularly are markdown, mindmap, and reveal presentation. My approach to note taking: the best way to learn something is to summarize the topic in your own words, in your own context, and present it to others with concrete examples.
Deft + Org for Notes
In the nvALT and Emacs post I described an integration between nvALT and Emacs using Deft for markdown notes. In the past year I have moved to using Deft for org notes rather than markdown notes. The nice thing about combining Deft with Org is that your notes are indexed and easy to find using Deft, but also retain all of the power of Org to organize and present information. For example, typing decision boundary into Deft quickly cuts down hundreds of org text notes to the handful that contain the words decision and boundary in them.
Org Mode ES2015+ Code Blocks (updated)
Babel 6x is a significant change from Babel 5x, as the CLI is now a separate node module called babel-cli, and transforms are now also delivered as separate packages. First make a few changes to the Emacs environment so you can use JavaScript in org mode, as well as find local node.js modules you have installed. Replace ~/org/node_modules in the configuration below with the location of any local node modules you want to use. Using this approach you don't have to pollute the global node_modules directory if you don't want to.
#+begin_src js :cmd "org-babel-node" :results output drawer
let arr = [1, 2];
let [x, y] = arr;
console.log(x);
console.log(y);
#+end_src
:RESULTS:
1
2
:END:
ancs-example-on-blend-nano
I decided to try getting Apple Notification Center Service (ANCS) working with the RedBearLab BLE Nano, and this post describes one way to get the Nordic ANCS demo running on a BLE Nano. This post shows how to get an ARM development environment and toolchain up and running on OS X in order to develop apps for the Nordic BLE SoC's (e.g. nRF51822 based boards like the BLE Nano). | 2018-10-18 20:05:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2840489447116852, "perplexity": 2422.358319375669}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512014.61/warc/CC-MAIN-20181018194005-20181018215505-00269.warc.gz"} |
https://www.wikijob.co.uk/aptitude-tests/test-types/praxis-tests
# Praxis Tests Prep Tips & Example Practice Questions
## What Are Praxis Tests?
Praxis tests are the group of pre-professional skills tests required as part of the teacher training college admissions process in the US.
They are a critical part of the teaching licensing process in many states, and most colleges will use them to assess the suitability of candidates for teaching.
All trainee teachers will be required to take at least one Praxis test, either before entry to teacher training college, during their college career or before entering the workforce as a student-teacher.
The Praxis tests are designed to assess the core academic skills (reading, writing and math) that the teaching profession requires, as well as the subject-specific knowledge required of a teacher.
There are three types of praxis test:
### 1. Praxis® Core Academic Skills for Educators (Core)
This test measures skills in reading, writing and mathematics for all trainee teachers, and is split into three separate papers.
### 2. Praxis® Subject Assessments
This test measures subject-specific knowledge for high school teachers, as well as their general teaching skills and skills which may be specific to the subject they are training to teach at K–12 level.
There are over 90 different Praxis subject assessment tests, and the ones you are required to take will depend on the state or licensing body you are applying to.
There is a Praxis Braille subject knowledge test, which is delivered in person.
### 3. Praxis® Content Knowledge
This test measures the subject-specific content knowledge required for teaching in elementary school.
Similar to the Praxis subject assessment for High School teachers, it measures the broad range of subject knowledge and pedagogy required by elementary school teachers.
Some Praxis tests are required to graduate from teacher training. All prospective teachers will have to pass the first Praxis® Core Academic Skills for Educators (Core).
Prospective high school teachers may also have to pass a praxis subject assessment, whilst prospective elementary school teachers may have to pass the praxis content knowledge test.
Each state has its own testing requirements.
## What Is Being Assessed?
As already mentioned, Praxis® Core Academic Skills for Educators (Core) is divided into three separate tests:
• Reading
• Writing
• Mathematics
The reading test requires the integration and analysis of multiple teaching documents and assesses a candidate’s ability to extract and synthesize information from long passages of text.
This skill is essential during teacher training college to gain a full understanding of pedagogy and theoretical teaching practice, and also forms a key part of the teaching profession.
The writing test is split into two essay tasks:
• One explanatory writing task that assesses a candidate’s ability to present verbal information clearly.
• One argumentative writing task that assesses a candidate’s persuasive writing ability.
These are both skills that will be essential to a candidate’s success at teacher training college.
The mathematics test includes questions with selected response answers and numerical entry.
An on-screen calculator is provided to ensure the test properly measures a candidate’s mathematical reasoning, without a fair assessment becoming obscured by simple arithmetic mistakes.
The Praxis® Subject Assessments measure subject-specific teaching skills and knowledge required of K–12 teachers.
They include selected-response questions (multiple-choice) as well as short answer essay questions.
Subject assessments include more questions on the principles of learning and teaching at one of four levels:
• Early childhood
• K–6
• 5–9
• 7–12
This ensures candidates entering K–12 teaching have a solid knowledge of child development.
There are over 90 tests and a different combination of tests will be required depending on where you are applying for teacher training.
You can check the state requirements for each praxis test on the ETS website.
## Question Formats and Type
### Praxis® Core Academic Skills Test – Mathematics
There are four types of Mathematics question:
• Number and Quantity – 17 questions, 30% of the test
• Algebra and Functions – 17 questions, 30% of test
• Geometry – 11 questions, 20% of the test
• Statistics and Probability – 11 questions, 20% of the test
The number of questions may vary slightly from test to test.
#### Number and Quantity Question Example:
Jennifer drives 30 miles to the hairdresser. She then travels another 6 miles to get to the utility company and 12 miles to the dentist’s office.
Assuming the utility company and the dentist's office were both on her way home from the hairdresser, approximately how many miles will she still have to drive to make it home?
a) 8
b) 12
c) 18
d) 48
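Worked through (my reading of the intended setup): the two stops lie along the same 30-mile route back home, so after the 6 + 12 = 18 miles to the utility company and the dentist, she has 30 − 18 = 12 miles left, i.e. answer (b):

```python
distance_home = 30           # driving home retraces the 30 miles to the hairdresser
driven_on_way_back = 6 + 12  # utility company, then dentist, both en route
remaining = distance_home - driven_on_way_back
print(remaining)  # 12 -> answer (b)
```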
#### Algebra Question Example:
In the following equation, what is the value of 'x'?
13 – x = 78
a) –65
b) 65
c) 93
d) 5.2
e) –93
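As a quick check of the algebra (my addition, not part of the page): isolating x gives x = 13 − 78 = −65, answer (a):

```python
# 13 - x = 78  =>  x = 13 - 78
x = 13 - 78
print(x)  # -65 -> answer (a)
```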
#### Probability and Statistics Question Example:
What is the probability of spinning a 'C' on the spinner above?
a) 1/8
b) 1/4
c) 1/3
d) 1/2
#### Geometry Question Example:
What is the area of this rectangle?
a) 15 ft
b) 9 ft
c) 12 ft
d) 18 ft
### Praxis® Core Academic Skills Test – Reading

The Praxis® Core Academic Skills: Reading test is divided into three categories of questions:
• Key Ideas and Details – 17 to 22 questions, or 35% of the test
• Craft, Structure and Language Skills – 14 to 19 questions, or 30% of the test
• Integration of Knowledge and Ideas – 17 to 22 questions, or 35% of the test
Each of these subcategories will present a passage of text, followed by three selected-response questions relating to that passage.
#### Example Question:
The opera Madame Butterfly by Giacomo Puccini was written in the early 20th century. The opera is about an American naval officer who is stationed in Japan. He meets and falls helplessly in love with Butterfly, a geisha. However, the love story is interrupted by his return to America. Before he leaves, he makes a promise to Butterfly to come back to Japan to marry her one day. However, when he does return three years later, she is shocked to see that he is joined by his American wife. Humiliated and heartbroken, Butterfly stabs herself. The American officer begs her to forgive him as she dies in his arms.
This passage describes the characters’ feelings in Madame Butterfly as all of the following except:
a) romantic
b) poignant
c) crushing
d) tragic
e) musical
### Praxis® Core Academic Skills Test – Writing
The Praxis writing test has two subcategories of question:
• Text Types, Purposes and Production – 6 to 12 selected-response questions and two essays, 60% of the test
• Language and Research Skills – 28 to 34 selected-response questions and short answer writing, 40% of the test
#### Example Selected-Response Question:
Detailed explanations of question examples can be found in the ETS Praxis reading study guide.
Which of the following is a flavor made from beans?
a) vanilla
b) basil
c) raspberry
d) pistachio
#### Essay Question Example:
“Minimum wage employers exploit people who need a job. Minimum wage jobs provide no opportunity for learning or progression and are dull and repetitive, and there is no incentive for employers to provide development for them. People in minimum wage jobs are therefore locked into low-paying work. A higher minimum wage could be an incentive for employers to find better-trained staff, as well as to provide better training for their staff for work to be carried out at a higher production rate and increase employee satisfaction.”
Discuss the extent to which you agree, disagree or are indifferent to this opinion, using supporting examples from your observation, experience or understanding.
You will have 30 minutes to write an argumentative essay that evaluates this statement fairly.
### Praxis® Subject Assessment
There are over 90 Praxis® Subject Assessment tests. Example questions for each one can be found in the specific study guide for the test, published by ETS.
## What to Expect When Taking the Praxis Tests
Except for the Praxis Braille test, all Praxis tests are delivered on a computer. They are delivered globally and in many different languages.
You can find out where your local test centers are and how to register on the ETS website.
Registration takes place through your Praxis account. It is important to read the Praxis information bulletin before you register and ensure you know exactly which tests your state and licensing body require you to take.
Your math test will include an onscreen calculator, so there is no need to bring your own, but you are allowed to if you prefer. You are not allowed to bring your own stationery, including pens and pencils; these are provided by the test center.
It is important to check your test location 24 hours before your test by checking your praxis account, as test locations sometimes change.
Your admissions ticket isn’t always a requirement at a test center, but it is important to have a hard printed copy in case it is requested. Have this to hand, as well as your required ID documents when you arrive.
Your test appointment includes a 30-minute tutorial window, in which you can familiarize yourself with the computer system. Make use of this, and arrive well in advance of your test.
Before taking your subject-specific test, check the requirements carefully.
For example, The Art: Content and Analysis test (5135) requires you to upload images of your work at least three days in advance of the test. It is recommended also to bring printed hard copies of these to the testing center.
## How the Test Is Scored
The Praxis test score report cards indicate whether you passed, the range of possible scores, the raw points achieved in comparison to the raw points available, and the median and middle 50% of scores.
Every select-response question is worth one point, whilst the value of short and long answer essay questions vary.
The tests are reviewed both by human scorers and by an automated scoring engine developed by ETS to fairly assess and compare the wide range of Praxis tests.
Raw scores are then converted to the points scale. This is because ETS publishes several, varying versions of the same test each year, so the scaled score metric allows for comparison between different forms of the test.
The scale conversion policy is reviewed regularly by ETS, so the best way to understand what your overall score really means is to review their latest FAQ documents on scoring.
There is a useful example of a praxis score card published by ETS, annotated to show the meaning of each section.
Depending on which universities or licensing bodies you have applied to, the score card will compare your highest score with the score required by that institution, with a clear pass or fail indication.
The score card also groups questions into content categories, so it is clear which types of questions a candidate has done well at and which might need further work.
If a candidate needs to resit a test, this, as well as the difference between raw points acquired and raw points available, provides the opportunity for a candidate to fairly assess what they need to improve on before they retake the test.
Some praxis tests offer unofficial score estimates immediately on completion of the test which indicate how likely it is that you have passed.
Praxis tests are offered continuously, and most score cards are made available within 21 days of taking the test. There are some tests offered only on specific dates, and these have varying score release dates.
## Praxis Tests: Tips for Success
• The praxis core test can either be taken as three separate tests or as one continuous test. If taken separately, each test lasts two hours; but if taken together, the whole test lasts five hours. We recommend taking the three separate tests to allow for recovery, increased focus and extra time to settle into the test center. The extra time in the separate tests only accounts for the twenty-minute induction to the test, which will be taken three times.
• Make use of the ETS study companions. ETS produces a detailed study guide for every paper, including all 90 subject assessments. The study guides include practice questions, answers and question explanations, and are the most thorough and up to date resources you will access.
• Use other free training available, including the Khan Academy practice tests.
• Answer every question, even if you are guessing. You cannot lose points for submitting incorrect answers.
## Final Thoughts
By comparison to other pre-professional tests, the Praxis tests are a demanding experience.
Every prospective teacher will have to spend at least seven hours undertaking the tests, and the pass marks, although variable, are generally very high, leaving little room for mistakes.
Although you can resit each test up to four times, they are expensive, each with a registration fee of $50 to $150, so it is worth getting it right the first time.
Some K–12 teachers will have to take multiple subject-specific Praxis tests. Even with only one set of resits, it would be possible to run up a four-figure testing bill.
As Praxis tests cover such a wide range of skills and knowledge, it is worth going into the test fully prepared.
There is a wealth of information and preparatory material available created by ETS specifically for each test. Make good use of these, as well as the wealth of other free resources that are available for the Praxis test, and you will set yourself up for success.
As with all timed tests, practice against the clock, familiarise yourself with the format and arrive rehearsed on the day. | 2022-06-25 19:18:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23303304612636566, "perplexity": 3211.941880795409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036099.6/warc/CC-MAIN-20220625190306-20220625220306-00786.warc.gz"} |
http://mathhelpforum.com/discrete-math/148130-using-combinatorics-identities-print.html | # Using combinatorics identities
• June 7th 2010, 08:04 AM
oldguynewstudent
Using combinatorics identities
Again just to make sure I understand what I'm doing:
What is the sum of the numbers in row 15 (i.e. n=15) of Pascal's triangle?
(Answer this without computing row 15.)
$2^n = \sum_{k=0}^{n}\binom{n}{k}$ for all n >= 0.
So the answer would be $2^{15}$ correct?
• June 7th 2010, 08:12 AM
undefined
Quote:
Originally Posted by oldguynewstudent
Again just to make sure I understand what I'm doing:
What is the sum of the numbers in row 15 (i.e. n=15) of Pascal's triangle?
(Answer this without computing row 15.)
$2^n = \sum_{k=0}^{n}\binom{n}{k}$ for all n >= 0.
So the answer would be $2^{15}$ correct?
Correct! And don't forget the first row is row 0. | 2015-09-04 22:29:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9108791947364807, "perplexity": 1288.8289028550319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645366585.94/warc/CC-MAIN-20150827031606-00052-ip-10-171-96-226.ec2.internal.warc.gz"} |
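The identity is easy to confirm by brute force — exactly what the exercise asks you to avoid by hand, but a useful sanity check in code (my addition):

```python
from math import comb  # binomial coefficient, Python 3.8+

n = 15
row_sum = sum(comb(n, k) for k in range(n + 1))  # sum of row 15 entries
print(row_sum, 2 ** n)  # 32768 32768
```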
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-2nd-edition/chapter-3-derivatives-3-4-the-product-and-quotient-rules-3-4-exercises-page-160/41 | ## Calculus: Early Transcendentals (2nd Edition)
$g'(t) = \frac{-3}{t^2} - \frac{2}{t^3}$
First Expand Fraction Into a Sum of Three Fractions $g(t) = \frac{t^3+3t^2+t}{t^3} = 1 + 3t^{-1}+t^{-2}$ Power Rule: $g'(t) = 1(0)t^{(0-1)} + 3(-1)t^{(-1-1)}+1(-2)t^{(-2-1)} = -3t^{-2}-2t^{-3} = \frac{-3}{t^2} - \frac{2}{t^3}$ | 2018-04-25 04:35:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7658352255821228, "perplexity": 1392.8962820577956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947693.49/warc/CC-MAIN-20180425041916-20180425061916-00396.warc.gz"} |
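A quick numerical spot-check of the simplified derivative (my addition, using a central difference at an arbitrary point t = 2):

```python
def g(t):
    return (t**3 + 3 * t**2 + t) / t**3

def g_prime(t):
    # The simplified derivative from the solution above.
    return -3 / t**2 - 2 / t**3

t, h = 2.0, 1e-6
numeric = (g(t + h) - g(t - h)) / (2 * h)  # central-difference approximation
print(numeric, g_prime(t))  # both approximately -1.0
```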
https://www.zbmath.org/authors/?q=ai%3Ayang.yu | ## Yang, Yu
Author ID: yang.yu
Published as: Yang, Yu; Yang, Yu.; Yang, Y.
External Links: ORCID
Documents Indexed: 183 Publications since 1990
Co-Authors: 148 Co-Authors with 113 Joint Publications
Co-Co-Authors: 4,319
### Co-Authors
21 single-authored 15 Zhou, Jinling 7 Liu, Hongbo 7 Xu, Yancong 7 Zhang, Tonghua 5 Chen, Wencheng 4 Hsu, Cheng-Hsiung 4 Wu, Yonghong 4 Xiao, Dongmei 4 Zhang, Cuimei 4 Zou, Lan 3 Brockwell, Peter J. 3 Cheng, Junsheng 3 Davis, Richard A. 3 Fu, Hongsun 3 Gao, Xiaoguang 3 Gopalakrishnan, Ganesh Lalitha 3 Guo, Zhigao 3 Liu, Shican 3 Ma, Xinsheng 3 Meng, Fanwei 3 Rockwell, Donald 3 Tang, Wai-Shing 3 Tang, Xiaomin 3 Yu, Dejie 3 Zhang, Shengliang 3 Zhang, Xiaodong 2 Cao, Jiayi 2 Chen, Daqing 2 Chung, Jacob N. 2 Crowe, Clayton T. 2 Gross, Laura K. 2 Gupta, Aarti 2 Li, Xiaoxiao 2 Phillips, Timothy N. 2 Qian, Weiyi 2 Ren, Hao 2 Ruan, Shigui 2 Takeuchi, Yasuhiro 2 Tang, Jiashi 2 Traverso, L. 2 Troutt, T. R. 2 Wang, Hongjie 2 Wiwatanapataphee, Benchawan 2 Xu, Shoujun 2 Yang, Hongqiang 2 Yang, Ju 2 Ye, Jin 2 Yu, Jun 2 Zhang, Xiufen 1 Adcock, Thomas A. A. 1 Askari, D. 1 Boland, Natashia L. 1 Bondeson, Anders 1 Brambleby, R. 1 Che, Bichen 1 Chen, Kewang 1 Chen, Xiaofang 1 Cheng, M. K. 1 Cheng, Yang 1 Cherif, Alhaji 1 Chiclana, Francisco 1 Cho, Minhyung 1 Chou, Ching-Tsun 1 Cui, Tao 1 Dai, Hongwei 1 Défago, Xavier 1 Di, Ruohai 1 Ding, Zhihui 1 Dong, Yueping 1 Draycott, Samuel 1 Dubljevic, Stevan S. 1 Erera, Alan L. 1 Fallah, Arash S. 1 Fan, Aiwan 1 Fang, Hui 1 Feng, Shigang 1 Foliente, G. C. 1 Ge, Xiangyu 1 Gohari, Peyman 1 Grandt, Morten 1 Guo, Yaxiao 1 Han, Bo 1 Hattaf, Khalid 1 He, Chu-chao 1 He, Yanxiang 1 Heyde, Christopher Charles 1 Hu, Zhengming 1 Huang, Minyi 1 Jiang, Xiaoyu 1 Jiang, Xu 1 Jiang, Xunyan 1 Jin, Meng-yuan 1 Jin, Tieying 1 Kahlon, Vineet 1 Kahn, Jeff D. 1 Karniadakis, George Em 1 Kerr, Matthew 1 Kirby, Robert Mike 1 Koch, Karl-Rudolf 1 Lam, N. T. K. ...and 94 more Co-Authors
### Serials
10 Applied Mathematics and Computation; 6 Discrete and Continuous Dynamical Systems. Series B; 5 Journal of Geodesy; 4 Journal of Fluid Mechanics; 4 Chaos, Solitons and Fractals; 3 Computers & Mathematics with Applications; 3 Discrete Applied Mathematics; 3 Mathematical Methods in the Applied Sciences; 3 Journal of Optimization Theory and Applications; 3 Mathematics in Practice and Theory; 3 Acta Mathematica Hungarica; 3 Nonlinear Analysis. Real World Applications; 2 International Journal of Multiphase Flow; 2 International Journal for Numerical Methods in Fluids; 2 International Journal of Solids and Structures; 2 Journal of Applied Mechanics; 2 Journal of Mathematical Analysis and Applications; 2 Journal of Algebra; 2 Journal of Applied Probability; 2 Journal of Computational and Applied Mathematics; 2 Theoretical Computer Science; 2 International Journal of Approximate Reasoning; 2 Applied Mathematics Letters; 2 International Journal of Computer Mathematics; 2 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering; 2 Applied Mathematics. Series A (Chinese Edition); 2 Pure and Applied Mathematics; 2 Journal of Combinatorial Optimization; 2 Mechanism and Machine Theory; 2 Journal of Systems Science and Complexity; 2 Quantum Information Processing; 2 RIMS Kôkyûroku Bessatsu; 2 International Journal of Biomathematics; 2 Journal of Applied Analysis and Computation; 1 Analysis Mathematica; 1 Applicable Analysis; 1 Communications in Algebra; 1 Computers and Fluids; 1 Communications in Mathematical Physics; 1 International Journal of Systems Science; 1 Inverse Problems; 1 Journal of Engineering Mathematics; 1 Journal of the Mechanics and Physics of Solids; 1 Mathematical Biosciences; 1 Wave Motion; 1 Annales Polonici Mathematici; 1 Ars Combinatoria; 1 Biometrika; 1 International Journal for Numerical Methods in Engineering; 1 Kodai Mathematical Journal; 1 Mathematics and Computers in Simulation; 1 Mathematical Journal of Okayama University; 1 Mathematische Zeitschrift; 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods; 1 Publications of the Research Institute for Mathematical Sciences, Kyoto University; 1 Journal of Sichuan University. Natural Science Edition; 1 Systems & Control Letters; 1 Journal of Xi’an Jiaotong University; 1 Bulletin of the Korean Mathematical Society; 1 Combinatorica; 1 Chinese Annals of Mathematics. Series B; 1 International Journal of Production Research; 1 Acta Automatica Sinica; 1 Graphs and Combinatorics; 1 Annals of Differential Equations; 1 Journal of Biomathematics; 1 Computers & Operations Research; 1 International Journal of Intelligent Systems; 1 Signal Processing; 1 IEEE Transactions on Signal Processing; 1 Acta Mathematica Universitatis Comenianae. New Series; 1 Applied Intelligence; 1 Numerical Algorithms; 1 Applied Mathematical Modelling; 1 European Journal of Operational Research; 1 Journal of Physics A: Mathematical and General; 1 Linear Algebra and its Applications; 1 SIAM Journal on Applied Mathematics; 1 Applicable Algebra in Engineering, Communication and Computing; 1 Archive of Applied Mechanics; 1 SIAM Journal on Optimization; 1 Formal Methods in System Design; 1 Applied Mathematics. Series B (English Edition); 1 Journal of Inverse and Ill-Posed Problems; 1 Selecta Mathematica. New Series; 1 Statistica Sinica; 1 Engineering Analysis with Boundary Elements; 1 Journal of Difference Equations and Applications; 1 Science in China. Series E; 1 European Journal of Control; 1 Abstract and Applied Analysis; 1 Far East Journal of Applied Mathematics; 1 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences; 1 Discrete Dynamics in Nature and Society; 1 Journal of Modern Optics; 1 Journal of Liaoning Normal University. Natural Science Edition; 1 Journal of Zhejiang University. Engineering Science; 1 Dynamical Systems; 1 Advances and Applications in Statistics; 1 Journal of Applied Mathematics; ...and 18 more Serials
### Fields
49 Biology and other natural sciences (92-XX); 29 Ordinary differential equations (34-XX); 18 Partial differential equations (35-XX); 18 Computer science (68-XX); 17 Dynamical systems and ergodic theory (37-XX); 14 Numerical analysis (65-XX); 13 Operations research, mathematical programming (90-XX); 12 Combinatorics (05-XX); 12 Fluid mechanics (76-XX); 11 Algebraic geometry (14-XX); 11 Systems theory; control (93-XX); 10 Mechanics of deformable solids (74-XX); 10 Game theory, economics, finance, and other social and behavioral sciences (91-XX); 9 Probability theory and stochastic processes (60-XX); 9 Statistics (62-XX); 7 Quantum theory (81-XX); 5 Number theory (11-XX); 5 Geophysics (86-XX); 5 Information and communication theory, circuits (94-XX); 4 Mechanics of particles and systems (70-XX); 3 Linear and multilinear algebra; matrix theory (15-XX); 3 Nonassociative rings and algebras (17-XX); 3 Difference and functional equations (39-XX); 2 General and overarching topics; collections (00-XX); 2 Order, lattices, ordered algebraic structures (06-XX); 2 Associative rings and algebras (16-XX); 2 Functional analysis (46-XX); 2 Operator theory (47-XX); 2 Calculus of variations and optimal control; optimization (49-XX); 2 Global analysis, analysis on manifolds (58-XX); 2 Optics, electromagnetic theory (78-XX); 2 Classical thermodynamics, heat transfer (80-XX); 2 Statistical mechanics, structure of matter (82-XX); 1 Mathematical logic and foundations (03-XX); 1 $$K$$-theory (19-XX); 1 Harmonic analysis on Euclidean spaces (42-XX); 1 Integral transforms, operational calculus (44-XX); 1 Convex and discrete geometry (52-XX); 1 Relativity and gravitational theory (83-XX)
### Citations contained in zbMATH Open
110 Publications have been cited 673 times in 408 Documents. The publications below are listed first by citation count, then by year.
Global dynamics of a delayed within-host viral infection model with both virus-to-cell and cell-to-cell transmissions. Zbl 1331.34160
Yang, Yu; Zou, Lan; Ruan, Shigui
2015
Global stability of a diffusive virus dynamics model with general incidence function and time delay. Zbl 1371.92118
Connell McCluskey, C.; Yang, Yu
2015
Estimation for nonnegative Lévy-driven Ornstein-Uhlenbeck processes. Zbl 05246106
Brockwell, Peter J.; Davis, Richard A.; Yang, Yu
2007
Global stability of an age-structured virus dynamics model with Beddington-DeAngelis infection function. Zbl 1330.35480
Yang, Yu; Ruan, Shigui; Xiao, Dongmei
2015
On defining long-range dependence. Zbl 0912.60050
Heyde, C. C.; Yang, Y.
1997
Nonstandard finite difference scheme for a diffusive within-host virus dynamics model with both virus-to-cell and cell-to-cell transmissions. Zbl 1359.92112
Yang, Yu; Zhou, Jinling; Ma, Xinsheng; Zhang, Tonghua
2016
Estimation for non-negative Lévy-driven CARMA processes. Zbl 1214.62091
Brockwell, Peter J.; Davis, Richard A.; Yang, Yu
2011
Continuous-time Gaussian autoregression. Zbl 1145.62070
Brockwell, Peter J.; Davis, Richard A.; Yang, Yu.
2007
Influence of latent period and nonlinear incidence rate on the dynamics of SIRS epidemiological models. Zbl 1187.34121
Yang, Yu; Xiao, Dongmei
2010
Hopf bifurcation in a two-competitor, one-prey system with time delay. Zbl 1181.34090
Yang, Yu
2009
Globally convergent optimization algorithms on Riemannian manifolds: Uniform framework for unconstrained and constrained optimization. Zbl 1153.90017
Yang, Y.
2007
Threshold dynamics of a diffusive SIRI model with nonlinear incidence rate. Zbl 1420.92115
Yang, Yu; Zhou, Jinling; Hsu, Cheng-Hsiung
2019
Maximum atom-bond connectivity index with given graph parameters. Zbl 1346.05145
Zhang, Xiu-Mei; Yang, Yu; Wang, Hua; Zhang, Xiao-Dong
2016
Stability and bifurcation in a simplified five-neuron BAM neural network with delays. Zbl 1198.34170
Yang, Yu; Ye, Jin
2009
$$H_{\infty }$$ controller synthesis for pendulum-like systems. Zbl 1157.93375
Yang, Y.; Huang, Lin
2003
Geometrically nonlinear analysis of cylindrical shells using the element-free kp-Ritz method. Zbl 1195.74305
Zhao, X.; Yang, Y.; Liew, K. M.
2007
A mathematical model with delays for schistosomiasis japonicum transmission. Zbl 1213.34103
Yang, Yu; Xiao, Dongmei
2010
Global stability of a discrete virus dynamics model with Holling type-II infection function. Zbl 1338.39028
Yang, Yu; Ma, Xinsheng; Li, Yahui
2016
Sparse envelope model: efficient estimation and response variable selection in multivariate linear regression. Zbl 07072139
Su, Z.; Zhu, G.; Chen, X.; Yang, Y.
2016
Global dynamics of a reaction-diffusion waterborne pathogen model with general incidence rate. Zbl 1391.37061
Zhou, Jinling; Yang, Yu; Zhang, Tonghua
2018
Recurrent point set of the shift on $${\varSigma}$$ and strong chaos. Zbl 1067.37011
Wang, Lidong; Liao, Gongfu; Yang, Yu
2002
3-D container packing heuristics. Zbl 1085.90050
Lim, A.; Rodrigues, B.; Yang, Y.
2005
On existence of multiple positive solutions for $$\phi$$-Laplacian multipoint boundary value. Zbl 1173.34315
Yang, Yu; Xiao, Dongmei
2009
Adaptively robust filtering for kinematic geodetic positioning. Zbl 1036.86513
Yang, Y.; He, H.; Xu, G.
2001
All 2-positive linear maps from $$M_3(\mathbb{C})$$ to $$M_3(\mathbb{C})$$ are decomposable. Zbl 1383.81028
Yang, Yu; Leung, Denny H.; Tang, Wai-Shing
2016
Subtrees of spiro and polyphenyl hexagonal chains. Zbl 1410.92178
Yang, Yu; Liu, Hongbo; Wang, Hua; Fu, Hongsun
2015
Synchronization and parameter identification of high-dimensional discrete chaotic systems via parametric adaptive control. Zbl 1091.93036
Yang, Yu; Ma, Xikui; Zhang, Hao
2006
Enumeration of BC-subtrees of trees. Zbl 1311.05085
Yang, Yu; Liu, Hongbo; Wang, Hua; Makeig, Scott
2015
Global dynamics of an age-structured viral infection model with general incidence function and absorption. Zbl 1403.35303
Hattaf, Khalid; Yang, Yu
2018
Experiments on particle dispersion in a plane wake. Zbl 1137.76792
Yang, Y.; Crowe, C. T.; Chung, J. N.; Troutt, T. R.
2000
Global dynamics of a discrete viral infection model with time delay, virus-to-cell and cell-to-cell transmissions. Zbl 1383.37074
Zhou, Jinling; Yang, Yu
2017
Global stability of a discrete multigroup SIR model with nonlinear incidence rate. Zbl 1384.92061
Zhou, Jinling; Yang, Yu; Zhang, Tonghua
2017
Two computationally efficient polynomial-iteration infeasible interior-point algorithms for linear programming. Zbl 1406.90075
Yang, Y.
2018
A multiquadric quasi-interpolations method for CEV option pricing model. Zbl 1418.91606
Zhang, Shengliang; Yang, Hongqiang; Yang, Yu
2019
Intuitionistic fuzzy sets: Spherical representation and distances. Zbl 1163.03031
Yang, Y.; Chiclana, F.
2009
Stability of a fractional order SEIR model with general incidence. Zbl 1436.34054
Yang, Yu; Xu, Liguang
2020
Traveling waves for a nonlocal dispersal SIR model with general nonlinear incidence rate and spatio-temporal delay. Zbl 1360.92124
Zhou, Jinling; Yang, Yu
2017
Linear matrix inequalities (LMIs) observer and controller design synthesis for parabolic PDE. Zbl 1360.93248
Yang, Yu; Dubljevic, Stevan
2014
Wave-structure interaction: simulation driven by quantitative imaging. Zbl 1041.76058
Sirisup, S.; Karniadakis, G. E.; Yang, Y.; Rockwell, D.
2004
On the crosscorrelation of sequences with the decimation factor $$d=\frac{p^n+1}{p+1}-\frac{p^n-1}{2}$$. Zbl 1015.94006
Hu, Z.; Li, X.; Mills, D.; Müller, E.; Sun, W.; Willems, W.; Yang, Y.; Zhang, Z.
2001
Long-concave functions and poset probabilities. Zbl 0928.52006
Kahn, Jeff; Yang, Yu
1998
The expected subtree number index in random polyphenylene and spiro chains. Zbl 1447.05059
Yang, Yu; Sun, Xiao-Jun; Cao, Jia-Yi; Wang, Hua; Zhang, Xiao-Dong
2020
A multi-moment transport model on cubed-sphere grid. Zbl 1241.76320
Chen, C. G.; Xiao, F.; Li, X. L.; Yang, Y.
2011
Dynamics of a waterborne pathogen model with spatial heterogeneity and general incidence rate. Zbl 1431.35215
Yang, Yu; Zou, Lan; Zhou, Jinling; Hsu, Cheng-Hsiung
2020
Dynamical analysis of a diffusive SIRS model with general incidence rate. Zbl 1443.37064
Yang, Yu; Zou, Lan; Zhang, Tonghua; Xu, Yancong
2020
Generalised Clark-Ocone formulae for differential forms. Zbl 1331.60106
Yang, Y.
2012
Stability and Hopf bifurcation of a delayed virus infection model with Beddington-DeAngelis infection function and cytotoxic T-lymphocyte immune response. Zbl 1337.34086
Yang, Yu
2015
Robust estimator for correlated observations based on bifactor equivalent weights. Zbl 1158.86340
Yang, Y.; Song, L.; Xu, T.
2002
Application of a modified Lindstedt-Poincaré method in coupled TDOF systems with quadratic nonlinearity and a constant external excitation. Zbl 1168.70302
Lim, C. W.; Lai, S. K.; Wu, B. S.; Sun, W. P.; Yang, Y.; Wang, C.
2009
Wave interaction with a vertical cylinder: Spanwise flow patterns and loading. Zbl 0993.76515
Yang, Y.; Rockwell, D.
2002
Robust Kalman filter for rank deficient observation models. Zbl 0999.86013
Koch, K. R.; Yang, Y.
1998
Global stability of an SEIQV epidemic model with general incidence rate. Zbl 1318.34069
Yang, Yu; Zhang, Cuimei; Jiang, Xunyan
2015
Continuum theory and phase-field simulation of magnetoelectric effects in multiferroic bismuth ferrite. Zbl 1200.74055
Li, L. J.; Yang, Y.; Shu, Y. C.; Li, J. Y.
2010
On the admissible fundamental groups of curves over algebraically closed fields of characteristic $$p>0$$. Zbl 1439.14102
Yang, Yu
2018
Schmidt number of bipartite and multipartite states under local projections. Zbl 1373.81054
Chen, Lin; Yang, Yu; Tang, Wai-Shing
2017
Numerical simulation of water entry with improved SPH method. Zbl 1404.76208
Shao, J. R.; Yang, Y.; Gong, H. F.; Liu, M. B.
2019
Global dynamics of a latent HIV infection model with general incidence function and multiple delays. Zbl 1405.34069
Yang, Yu; Dong, Yueping; Takeuchi, Yasuhiro
2019
Wave propagation in two-dimensional anisotropic acoustic metamaterials of K4 topology. Zbl 1467.74043
Fallah, A. S.; Yang, Y.; Ward, R.; Tootkaboni, M.; Brambleby, R.; Louhghalam, A.; Louca, L. A.
2015
On spiro and polyphenyl hexagonal chains with respect to the number of BC-subtrees. Zbl 1362.05038
Yang, Yu; Liu, Hongbo; Wang, Hua; Sun, Shichang
2017
A proximal iteratively regularized Gauss-Newton method for nonlinear inverse problems. Zbl 1365.65241
Fu, Hongsun; Liu, Hongbo; Han, Bo; Yang, Yu; Hu, Yi
2017
Flow past a rotationally oscillating cylinder. Zbl 1294.76027
Kumar, S.; Lopez, C.; Probst, O.; Francisco, G.; Askari, D.; Yang, Y.
2013
A microstructure-based description for cyclic plasticity of pearlitic steel with experimental verification. Zbl 0988.74504
Peng, X.; Fan, J.; Yang, Y.
2002
Shape optimization for radar cross sections by a gradient method. Zbl 1074.78006
Bondeson, A.; Yang, Y.; Weinerfelt, P.
2004
Prescribing zeros and poles on a compact Riemann surface for a gravitationally coupled Abelian gauge field theory. Zbl 1077.83061
Yang, Y.
2004
Traveling waves for a nonlocal dispersal vaccination model with general incidence. Zbl 1432.37120
Zhou, Jinling; Yang, Yu; Hsu, Cheng-Hsiung
2020
Analysis of a model with multiple infectious stages and arbitrarily distributed stage durations. Zbl 1337.92222
Yang, Y.; Xu, D.; Feng, Z.
2008
Robust estimation of geodetic datum transformation. Zbl 1001.86507
Yang, Y.
1999
Comparison study of dynamics in one-sided and two-sided solid-combustion models. Zbl 1210.35299
Yang, Y.; Gross, L. K.; Yu, J.
2010
Global stability of a diffusive and delayed virus dynamics model with Beddington-DeAngelis incidence function and CTL immune response. Zbl 1443.92113
Yang, Yu; Xu, Yancong
2016
Global convergence of a class of discrete-time interconnected pendulum-like systems. Zbl 1145.93033
Yang, Y.; Duan, Z. S.; Huang, L.
2007
Implementation of supervisory control using extended finite-state machines. Zbl 1156.93302
Yang, Y.; Mannani, A.; Gohari, P.
2008
Learning Bayesian network parameters from small data sets: a further constrained qualitatively maximum a posteriori method. Zbl 1419.68075
Guo, Zhi-gao; Gao, Xiao-guang; Ren, Hao; Yang, Yu; Di, Ruo-hai; Chen, Da-qing
2017
A delayed virus infection model with cell-to-cell transmission and CTL immune response. Zbl 1379.92030
Yang, Yu; Zhang, Tonghua; Xu, Yancong; Zhou, Jinling
2017
Modeling and analysis of recurrent autoimmune disease. Zbl 1437.34063
Xu, Yancong; Yang, Yu; Meng, Fanwei; Yu, Pei
2020
On the averages of generalized Hasse-Witt invariants of pointed stable curves in positive characteristic. Zbl 1453.14084
Yang, Yu
2020
Mixed finite element methods for groundwater flow in heterogeneous aquifers. Zbl 1391.76755
Traverso, L.; Phillips, T. N.; Yang, Y.
2013
Pharmacokinetic model based on multifactor uncertain differential equation. Zbl 07332910
Liu, Z.; Yang, Y.
2021
A novel numerical algorithm based on self-tuning controller to support TCP flows. Zbl 1163.65047
Xiong, N.; Yang, L. T.; Yang, Y.; Défago, X.; He, Yan Xiang
2008
The subadditive topological pressure of a noncompact system. Zbl 1340.37011
Yang, Yu; Zhang, Xiufen
2015
On algorithms for enumerating BC-subtrees of unicyclic and edge-disjoint bicyclic graphs. Zbl 1332.05080
Yang, Yu; Liu, Hongbo; Wang, Hua; Feng, Shigang
2016
Robust estimation of systematic errors of satellite laser range. Zbl 1002.86503
Yang, Y.; Cheng, M. K.; Shum, C. K.; Tapley, B. D.
1999
A simulation approach for evaluation and improvement of organisational planning in collaborative product development projects. Zbl 1198.90262
Zhang, Xiaodong; Luo, Le; Yang, Yu; Li, Yingzi; Schlick, Christopher M.; Grandt, Morten
2009
Hopf bifurcation in a predator-prey system with discrete and distributed delays. Zbl 1198.34146
Yang, Yu; Ye, Jin
2009
On solutions of a system of rational difference equations. Zbl 1240.39032
Yang, Yu; Chen, Li; Shi, Yong-Guo
2011
Uniformly strong persistence of a nonlinear asymptotically periodic multispecies competition predator–prey system with general functional response. Zbl 1108.92048
Yang, Yu; Chen, Wencheng
2006
A displacement equivalence-based damage model for brittle materials. I: Theory. Zbl 1110.74685
Soh, C. K.; Liu, Y.; Yang, Y.; Dong, Y.
2003
A displacement equivalence-based damage model for brittle materials. II: Verification. Zbl 1110.74567
Liu, Y.; Soh, C. K.; Dong, Y.; Yang, Y.
2003
Quantitative behavior of non-integrable systems. I. Zbl 1474.37033
Beck, J.; Donders, M.; Yang, Y.
2020
Generalized Choi states and 2-distillability of quantum states. Zbl 1395.81029
Chen, Lin; Tang, Wai-Shing; Yang, Yu
2018
General parameterized time-frequency transform. Zbl 1394.94663
Yang, Y.; Peng, Z. K.; Dong, X. J.; Zhang, W. M.; Meng, G.
2014
Sub-additive topological pressure of proper maps. Zbl 1442.37030
Li, Zhiming; Ding, Zhihui; Yang, Yu
2018
Derivations of the Schrödinger algebra and their applications. Zbl 1430.17053
Yang, Yu; Tang, Xiaomin
2018
Dynamics of a diffusive vaccination model with nonlinear incidence. Zbl 1417.92098
Yang, Yu; Zhang, Shengliang
2018
Fast and accurate static data-race detection for concurrent programs. Zbl 1135.68368
Kahlon, Vineet; Yang, Yu; Sankaranarayanan, Sriram; Gupta, Aarti
2007
Application of frequency family separation method based upon EMD and local Hilbert energy spectrum method to gear fault diagnosis. Zbl 1342.70003
Cheng, Junsheng; Yu, Dejie; Tang, Jiashi; Yang, Yu
2008
Dynamic model checking with property driven pruning to detect race conditions. Zbl 1183.68383
Wang, Chao; Yang, Yu; Gupta, Aarti; Gopalakrishnan, Ganesh
2008
An explicit basis for the rational higher Chow groups of abelian number fields. Zbl 1388.14033
Kerr, Matthew; Yang, Yu
2018
Accelerated stochastic algorithms for nonconvex finite-sum and multiblock optimization. Zbl 1431.90120
Lan, Guanghui; Yang, Yu
2019
Global stability of VEISV propagation modeling for network worm attack. Zbl 1432.68015
Yang, Yu
2015
Multi-distance granularity structural $$\alpha$$-subtree index of generalized Bethe trees. Zbl 1428.05058
Yang, Yu; Fan, Ai-wan; Wang, Hua; Lv, Hailian; Zhang, Xiao-Dong
2019
Pharmacokinetic model based on multifactor uncertain differential equation. Zbl 07332910
Liu, Z.; Yang, Y.
2021
Raynaud-Tamagawa theta divisors and new-ordinariness of ramified coverings of curves. Zbl 07396018
Yang, Yu
2021
Stability of a fractional order SEIR model with general incidence. Zbl 1436.34054
Yang, Yu; Xu, Liguang
2020
The expected subtree number index in random polyphenylene and spiro chains. Zbl 1447.05059
Yang, Yu; Sun, Xiao-Jun; Cao, Jia-Yi; Wang, Hua; Zhang, Xiao-Dong
2020
Dynamics of a waterborne pathogen model with spatial heterogeneity and general incidence rate. Zbl 1431.35215
Yang, Yu; Zou, Lan; Zhou, Jinling; Hsu, Cheng-Hsiung
2020
Dynamical analysis of a diffusive SIRS model with general incidence rate. Zbl 1443.37064
Yang, Yu; Zou, Lan; Zhang, Tonghua; Xu, Yancong
2020
Traveling waves for a nonlocal dispersal vaccination model with general incidence. Zbl 1432.37120
Zhou, Jinling; Yang, Yu; Hsu, Cheng-Hsiung
2020
Modeling and analysis of recurrent autoimmune disease. Zbl 1437.34063
Xu, Yancong; Yang, Yu; Meng, Fanwei; Yu, Pei
2020
On the averages of generalized Hasse-Witt invariants of pointed stable curves in positive characteristic. Zbl 1453.14084
Yang, Yu
2020
Quantitative behavior of non-integrable systems. I. Zbl 1474.37033
Beck, J.; Donders, M.; Yang, Y.
2020
Nonlinear derivations of incidence algebras. Zbl 1474.16069
Yang, Y.
2020
The complexity of total edge domination and some related results on trees. Zbl 1466.05170
Pan, Zhuo; Yang, Yu; Li, Xianyue; Xu, Shou-Jun
2020
Threshold dynamics of a diffusive SIRI model with nonlinear incidence rate. Zbl 1420.92115
Yang, Yu; Zhou, Jinling; Hsu, Cheng-Hsiung
2019
A multiquadric quasi-interpolations method for CEV option pricing model. Zbl 1418.91606
Zhang, Shengliang; Yang, Hongqiang; Yang, Yu
2019
Numerical simulation of water entry with improved SPH method. Zbl 1404.76208
Shao, J. R.; Yang, Y.; Gong, H. F.; Liu, M. B.
2019
Global dynamics of a latent HIV infection model with general incidence function and multiple delays. Zbl 1405.34069
Yang, Yu; Dong, Yueping; Takeuchi, Yasuhiro
2019
Accelerated stochastic algorithms for nonconvex finite-sum and multiblock optimization. Zbl 1431.90120
Lan, Guanghui; Yang, Yu
2019
Multi-distance granularity structural $$\alpha$$-subtree index of generalized Bethe trees. Zbl 1428.05058
Yang, Yu; Fan, Ai-wan; Wang, Hua; Lv, Hailian; Zhang, Xiao-Dong
2019
Group-theoretic characterizations of almost open immersions of curves. Zbl 07060821
Yang, Yu
2019
Learning Bayesian network parameters via minimax algorithm. Zbl 1456.68146
Gao, Xiao-guang; Guo, Zhi-gao; Ren, Hao; Yang, Yu; Chen, Da-qing; He, Chu-chao
2019
Global dynamics of a reaction-diffusion waterborne pathogen model with general incidence rate. Zbl 1391.37061
Zhou, Jinling; Yang, Yu; Zhang, Tonghua
2018
Global dynamics of an age-structured viral infection model with general incidence function and absorption. Zbl 1403.35303
Hattaf, Khalid; Yang, Yu
2018
Two computationally efficient polynomial-iteration infeasible interior-point algorithms for linear programming. Zbl 1406.90075
Yang, Y.
2018
On the admissible fundamental groups of curves over algebraically closed fields of characteristic $$p>0$$. Zbl 1439.14102
Yang, Yu
2018
Generalized Choi states and 2-distillability of quantum states. Zbl 1395.81029
Chen, Lin; Tang, Wai-Shing; Yang, Yu
2018
Sub-additive topological pressure of proper maps. Zbl 1442.37030
Li, Zhiming; Ding, Zhihui; Yang, Yu
2018
Derivations of the Schrödinger algebra and their applications. Zbl 1430.17053
Yang, Yu; Tang, Xiaomin
2018
Dynamics of a diffusive vaccination model with nonlinear incidence. Zbl 1417.92098
Yang, Yu; Zhang, Shengliang
2018
An explicit basis for the rational higher Chow groups of abelian number fields. Zbl 1388.14033
Kerr, Matthew; Yang, Yu
2018
Biderivations of the higher rank Witt algebra without anti-symmetric condition. Zbl 1391.17015
Tang, Xiaomin; Yang, Yu
2018
Global dynamics of a discrete viral infection model with time delay, virus-to-cell and cell-to-cell transmissions. Zbl 1383.37074
Zhou, Jinling; Yang, Yu
2017
Global stability of a discrete multigroup SIR model with nonlinear incidence rate. Zbl 1384.92061
Zhou, Jinling; Yang, Yu; Zhang, Tonghua
2017
Traveling waves for a nonlocal dispersal SIR model with general nonlinear incidence rate and spatio-temporal delay. Zbl 1360.92124
Zhou, Jinling; Yang, Yu
2017
Schmidt number of bipartite and multipartite states under local projections. Zbl 1373.81054
Chen, Lin; Yang, Yu; Tang, Wai-Shing
2017
On spiro and polyphenyl hexagonal chains with respect to the number of BC-subtrees. Zbl 1362.05038
Yang, Yu; Liu, Hongbo; Wang, Hua; Sun, Shichang
2017
A proximal iteratively regularized Gauss-Newton method for nonlinear inverse problems. Zbl 1365.65241
Fu, Hongsun; Liu, Hongbo; Han, Bo; Yang, Yu; Hu, Yi
2017
Learning Bayesian network parameters from small data sets: a further constrained qualitatively maximum a posteriori method. Zbl 1419.68075
Guo, Zhi-gao; Gao, Xiao-guang; Ren, Hao; Yang, Yu; Di, Ruo-hai; Chen, Da-qing
2017
A delayed virus infection model with cell-to-cell transmission and CTL immune response. Zbl 1379.92030
Yang, Yu; Zhang, Tonghua; Xu, Yancong; Zhou, Jinling
2017
A meshless symplectic algorithm for nonlinear wave equation using highly accurate RBFs quasi-interpolation. Zbl 1427.65315
Zhang, Shengliang; Yang, Yu; Yang, Hongqiang
2017
Nonstandard finite difference scheme for a diffusive within-host virus dynamics model with both virus-to-cell and cell-to-cell transmissions. Zbl 1359.92112
Yang, Yu; Zhou, Jinling; Ma, Xinsheng; Zhang, Tonghua
2016
Maximum atom-bond connectivity index with given graph parameters. Zbl 1346.05145
Zhang, Xiu-Mei; Yang, Yu; Wang, Hua; Zhang, Xiao-Dong
2016
Global stability of a discrete virus dynamics model with Holling type-II infection function. Zbl 1338.39028
Yang, Yu; Ma, Xinsheng; Li, Yahui
2016
Sparse envelope model: efficient estimation and response variable selection in multivariate linear regression. Zbl 07072139
Su, Z.; Zhu, G.; Chen, X.; Yang, Y.
2016
All 2-positive linear maps from $$M_3(\mathbb{C})$$ to $$M_3(\mathbb{C})$$ are decomposable. Zbl 1383.81028
Yang, Yu; Leung, Denny H.; Tang, Wai-Shing
2016
Global stability of a diffusive and delayed virus dynamics model with Beddington-DeAngelis incidence function and CTL immune response. Zbl 1443.92113
Yang, Yu; Xu, Yancong
2016
On algorithms for enumerating BC-subtrees of unicyclic and edge-disjoint bicyclic graphs. Zbl 1332.05080
Yang, Yu; Liu, Hongbo; Wang, Hua; Feng, Shigang
2016
Global dynamics of a delayed within-host viral infection model with both virus-to-cell and cell-to-cell transmissions. Zbl 1331.34160
Yang, Yu; Zou, Lan; Ruan, Shigui
2015
Global stability of a diffusive virus dynamics model with general incidence function and time delay. Zbl 1371.92118
Connell McCluskey, C.; Yang, Yu
2015
Global stability of an age-structured virus dynamics model with Beddington-DeAngelis infection function. Zbl 1330.35480
Yang, Yu; Ruan, Shigui; Xiao, Dongmei
2015
Subtrees of spiro and polyphenyl hexagonal chains. Zbl 1410.92178
Yang, Yu; Liu, Hongbo; Wang, Hua; Fu, Hongsun
2015
Enumeration of BC-subtrees of trees. Zbl 1311.05085
Yang, Yu; Liu, Hongbo; Wang, Hua; Makeig, Scott
2015
Stability and Hopf bifurcation of a delayed virus infection model with Beddington-DeAngelis infection function and cytotoxic T-lymphocyte immune response. Zbl 1337.34086
Yang, Yu
2015
Global stability of an SEIQV epidemic model with general incidence rate. Zbl 1318.34069
Yang, Yu; Zhang, Cuimei; Jiang, Xunyan
2015
Wave propagation in two-dimensional anisotropic acoustic metamaterials of K4 topology. Zbl 1467.74043
Fallah, A. S.; Yang, Y.; Ward, R.; Tootkaboni, M.; Brambleby, R.; Louhghalam, A.; Louca, L. A.
2015
The subadditive topological pressure of a noncompact system. Zbl 1340.37011
Yang, Yu; Zhang, Xiufen
2015
Global stability of VEISV propagation modeling for network worm attack. Zbl 1432.68015
Yang, Yu
2015
Linear matrix inequalities (LMIs) observer and controller design synthesis for parabolic PDE. Zbl 1360.93248
Yang, Yu; Dubljevic, Stevan
2014
General parameterized time-frequency transform. Zbl 1394.94663
Yang, Y.; Peng, Z. K.; Dong, X. J.; Zhang, W. M.; Meng, G.
2014
Flow past a rotationally oscillating cylinder. Zbl 1294.76027
Kumar, S.; Lopez, C.; Probst, O.; Francisco, G.; Askari, D.; Yang, Y.
2013
Mixed finite element methods for groundwater flow in heterogeneous aquifers. Zbl 1391.76755
Traverso, L.; Phillips, T. N.; Yang, Y.
2013
Generalised Clark-Ocone formulae for differential forms. Zbl 1331.60106
Yang, Y.
2012
Estimation of response of plate structure subject to low velocity impact by a solid object. Zbl 1359.74282
Yang, Y.; Lam, N. T. K.; Zhang, L.
2012
Estimation for non-negative Lévy-driven CARMA processes. Zbl 1214.62091
Brockwell, Peter J.; Davis, Richard A.; Yang, Yu
2011
A multi-moment transport model on cubed-sphere grid. Zbl 1241.76320
Chen, C. G.; Xiao, F.; Li, X. L.; Yang, Y.
2011
On solutions of a system of rational difference equations. Zbl 1240.39032
Yang, Yu; Chen, Li; Shi, Yong-Guo
2011
Influence of latent period and nonlinear incidence rate on the dynamics of SIRS epidemiological models. Zbl 1187.34121
Yang, Yu; Xiao, Dongmei
2010
A mathematical model with delays for schistosomiasis japonicum transmission. Zbl 1213.34103
Yang, Yu; Xiao, Dongmei
2010
Continuum theory and phase-field simulation of magnetoelectric effects in multiferroic bismuth ferrite. Zbl 1200.74055
Li, L. J.; Yang, Y.; Shu, Y. C.; Li, J. Y.
2010
Comparison study of dynamics in one-sided and two-sided solid-combustion models. Zbl 1210.35299
Yang, Y.; Gross, L. K.; Yu, J.
2010
Hopf bifurcation in a two-competitor, one-prey system with time delay. Zbl 1181.34090
Yang, Yu
2009
Stability and bifurcation in a simplified five-neuron BAM neural network with delays. Zbl 1198.34170
Yang, Yu; Ye, Jin
2009
On existence of multiple positive solutions for $$\phi$$-Laplacian multipoint boundary value. Zbl 1173.34315
Yang, Yu; Xiao, Dongmei
2009
Intuitionistic fuzzy sets: Spherical representation and distances. Zbl 1163.03031
Yang, Y.; Chiclana, F.
2009
Application of a modified Lindstedt-Poincaré method in coupled TDOF systems with quadratic nonlinearity and a constant external excitation. Zbl 1168.70302
Lim, C. W.; Lai, S. K.; Wu, B. S.; Sun, W. P.; Yang, Y.; Wang, C.
2009
A simulation approach for evaluation and improvement of organisational planning in collaborative product development projects. Zbl 1198.90262
Zhang, Xiaodong; Luo, Le; Yang, Yu; Li, Yingzi; Schlick, Christopher M.; Grandt, Morten
2009
Hopf bifurcation in a predator-prey system with discrete and distributed delays. Zbl 1198.34146
Yang, Yu; Ye, Jin
2009
Analysis of a model with multiple infectious stages and arbitrarily distributed stage durations. Zbl 1337.92222
Yang, Y.; Xu, D.; Feng, Z.
2008
Implementation of supervisory control using extended finite-state machines. Zbl 1156.93302
Yang, Y.; Mannani, A.; Gohari, P.
2008
A novel numerical algorithm based on self-tuning controller to support TCP flows. Zbl 1163.65047
Xiong, N.; Yang, L. T.; Yang, Y.; Défago, X.; He, Yan Xiang
2008
Application of frequency family separation method based upon EMD and local Hilbert energy spectrum method to gear fault diagnosis. Zbl 1342.70003
Cheng, Junsheng; Yu, Dejie; Tang, Jiashi; Yang, Yu
2008
Dynamic model checking with property driven pruning to detect race conditions. Zbl 1183.68383
Wang, Chao; Yang, Yu; Gupta, Aarti; Gopalakrishnan, Ganesh
2008
Estimation for nonnegative Lévy-driven Ornstein-Uhlenbeck processes. Zbl 05246106
Brockwell, Peter J.; Davis, Richard A.; Yang, Yu
2007
Continuous-time Gaussian autoregression. Zbl 1145.62070
Brockwell, Peter J.; Davis, Richard A.; Yang, Yu.
2007
Globally convergent optimization algorithms on Riemannian manifolds: Uniform framework for unconstrained and constrained optimization. Zbl 1153.90017
Yang, Y.
2007
Geometrically nonlinear analysis of cylindrical shells using the element-free kp-Ritz method. Zbl 1195.74305
Zhao, X.; Yang, Y.; Liew, K. M.
2007
Global convergence of a class of discrete-time interconnected pendulum-like systems. Zbl 1145.93033
Yang, Y.; Duan, Z. S.; Huang, L.
2007
Fast and accurate static data-race detection for concurrent programs. Zbl 1135.68368
Kahlon, Vineet; Yang, Yu; Sankaranarayanan, Sriram; Gupta, Aarti
2007
Synchronization and parameter identification of high-dimensional discrete chaotic systems via parametric adaptive control. Zbl 1091.93036
Yang, Yu; Ma, Xikui; Zhang, Hao
2006
Uniformly strong persistence of a nonlinear asymptotically periodic multispecies competition predator–prey system with general functional response. Zbl 1108.92048
Yang, Yu; Chen, Wencheng
2006
3-D container packing heuristics. Zbl 1085.90050
Lim, A.; Rodrigues, B.; Yang, Y.
2005
Wave-structure interaction: simulation driven by quantitative imaging. Zbl 1041.76058
Sirisup, S.; Karniadakis, G. E.; Yang, Y.; Rockwell, D.
2004
Shape optimization for radar cross sections by a gradient method. Zbl 1074.78006
Bondeson, A.; Yang, Y.; Weinerfelt, P.
2004
Prescribing zeros and poles on a compact Riemann surface for a gravitationally coupled Abelian gauge field theory. Zbl 1077.83061
Yang, Y.
2004
Interaction of a deep-water wave with a vertical cylinder: flow structure and loading. Zbl 1060.76523
Yang, Y.; Rockwell, D.
2004
$$H_{\infty }$$ controller synthesis for pendulum-like systems. Zbl 1157.93375
Yang, Y.; Huang, Lin
2003
A displacement equivalence-based damage model for brittle materials. I: Theory. Zbl 1110.74685
Soh, C. K.; Liu, Y.; Yang, Y.; Dong, Y.
2003
A displacement equivalence-based damage model for brittle materials. II: Verification. Zbl 1110.74567
Liu, Y.; Soh, C. K.; Dong, Y.; Yang, Y.
2003
Recurrent point set of the shift on $${\varSigma}$$ and strong chaos. Zbl 1067.37011
Wang, Lidong; Liao, Gongfu; Yang, Yu
2002
Robust estimator for correlated observations based on bifactor equivalent weights. Zbl 1158.86340
Yang, Y.; Song, L.; Xu, T.
2002
Wave interaction with a vertical cylinder: Spanwise flow patterns and loading. Zbl 0993.76515
Yang, Y.; Rockwell, D.
2002
...and 10 more Documents
all top 5
### Cited by 657 Authors
32 Yang, Yu 29 Elaiw, Ahmed M. 12 Xu, Jinhu 11 Geng, Yan 10 Zhou, Jinling 8 Alshamrani, Noura H. 7 Alshaikh, Matuka A. 7 Rong, Libin 7 Yuan, Rong 6 Avila-Vales, Eric Jose 6 Miao, Hui 6 Wang, Jinliang 5 Das, Kinkar Chandra 5 Enatsu, Yoichi 5 Hattaf, Khalid 5 Hoang, Manh Tuan 5 Lv, Yunfei 5 Ma, Wanbiao 5 Teng, Zhi-dong 5 Wang, Lidong 5 Wang, Shaoli 5 Wang, Xia 5 Xu, Rui 5 Xu, Yancong 5 Xu, Zhiting 5 Zhang, Suxia 5 Zhang, Tonghua 4 Alade, Taofeek O. 4 Alsulami, Saud Mastour A. 4 Brockwell, Peter J. 4 Chen, Xiaodan 4 De Baets, Bernard 4 De Loof, Karel 4 De Meyer, Hans E. 4 Duan, Lian 4 Hou, Jiangyong 4 Hsu, Cheng-Hsiung 4 Kuniya, Toshikazu 4 Muroya, Yoshiaki 4 Nakata, Yukihiko 4 Pérez, Ángel G. C. 4 Takeuchi, Yasuhiro 4 Tang, Xiaosong 4 Wang, Wei 4 Zhang, Ran 4 Zhang, Xinsheng 3 Chen, Yuming 3 Cheng, Chang-Yuan 3 Du, Xiuli 3 Guo, Ting 3 Hobiny, Aatef D. 3 Huang, Lihong 3 Huo, Hai-Feng 3 Liu, Hongbo 3 Long, Hongwei 3 Masuda, Hiroki 3 Mercuri, Lorenzo 3 Pei, Yongzhen 3 Raezah, A. A. 3 Rohde, Victor Ulrich 3 Ruan, Shigui 3 Shen, Guangjun 3 Song, Xinyu 3 Wang, Fengquan 3 Wang, Yan 3 Yang, Junyuan 3 Yu, Qian 3 Zhang, Shibin 3 Zhang, Xiaodong 3 Zou, Lan 2 Abdelrazeq, Ibrahim 2 Abdurahman, Xamxinur 2 Afsharnezhad, Zahra 2 Al Agha, A. D. 2 Almatrafi, A. A. 2 Alofi, B. S. 2 Aminataei, Azim 2 Araneda, Axel A. 2 Benth, Fred Espen 2 Bentout, Soufiane 2 Browne, Cameron J. 2 Cai, Liming 2 Cao, Jiayi 2 Cao, Jinde 2 Cariello, Daniel 2 Chakrabarty, Siddhartha P. 2 Chen, Chaohui 2 Chu, Zhenyan 2 Dette, Holger 2 Dimitrov, Darko 2 Ding, Chunxiao 2 Djilali, Salih 2 Dogan, Abdulkadir 2 Dong, Yueping 2 Fasen, Vicky 2 Feng, Zhaosheng 2 Ferrazzano, Vincenzo 2 Fuchs, Florian 2 Gao, Jianguo 2 Gao, Shujing ...and 557 more Authors
all top 5
### Cited in 149 Serials
19 Applied Mathematics and Computation 19 International Journal of Biomathematics 17 Chaos, Solitons and Fractals 14 Computers & Mathematics with Applications 14 Discrete and Continuous Dynamical Systems. Series B 14 Advances in Difference Equations 13 Journal of Mathematical Analysis and Applications 12 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 12 Mathematical Biosciences and Engineering 9 Discrete Applied Mathematics 8 Nonlinear Dynamics 8 Nonlinear Analysis. Real World Applications 6 Mathematics and Computers in Simulation 6 Discrete Dynamics in Nature and Society 6 Journal of Applied Mathematics and Computing 6 Journal of Biological Dynamics 5 Statistics & Probability Letters 5 Applied Mathematics Letters 5 Complexity 5 Electronic Journal of Statistics 4 Journal of Mathematical Biology 4 Journal of Computational and Applied Mathematics 4 Journal of Time Series Analysis 4 Applied Mathematical Modelling 4 International Journal of Computer Mathematics 4 Communications in Nonlinear Science and Numerical Simulation 4 Journal of Nonlinear Science and Applications 3 Annals of the Institute of Statistical Mathematics 3 Journal of Multivariate Analysis 3 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 3 International Journal of Approximate Reasoning 3 Journal of Dynamics and Differential Equations 3 Computational and Applied Mathematics 3 Abstract and Applied Analysis 3 Statistical Inference for Stochastic Processes 3 Journal of Applied Mathematics 3 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 3 Communications on Pure and Applied Analysis 3 Boundary Value Problems 3 Journal of Mathematics 2 Applicable Analysis 2 Journal of the Franklin Institute 2 Letters in Mathematical Physics 2 Mathematical Biosciences 2 Fuzzy Sets and Systems 2 Journal of Econometrics 2 Topology and its Applications 2 Acta Applicandae Mathematicae 2 Communications in Statistics. 
Theory and Methods 2 Linear Algebra and its Applications 2 Stochastic Processes and their Applications 2 Journal of Nonlinear Science 2 Statistical Papers 2 Journal of Difference Equations and Applications 2 Boletín de la Sociedad Matemática Mexicana. Third Series 2 Bernoulli 2 Science in China. Series E 2 Taiwanese Journal of Mathematics 2 Journal of Inequalities and Applications 2 Qualitative Theory of Dynamical Systems 2 Dynamical Systems 2 Computational & Mathematical Methods in Medicine 2 Science China. Mathematics 2 Journal of the Korean Society for Industrial and Applied Mathematics 2 Journal of Applied Analysis and Computation 2 Cogent Mathematics & Statistics 1 International Journal of Modern Physics B 1 Communications in Algebra 1 Discrete Mathematics 1 International Journal of Control 1 Indian Journal of Pure & Applied Mathematics 1 Mathematical Methods in the Applied Sciences 1 Metrika 1 Nonlinearity 1 Physica A 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Theory of Probability and its Applications 1 The Annals of Statistics 1 Automatica 1 Journal of Algebra 1 Journal of Statistical Planning and Inference 1 Kybernetika 1 Mathematische Zeitschrift 1 Proceedings of the American Mathematical Society 1 Ricerche di Matematica 1 SIAM Journal on Control and Optimization 1 Studies in Applied Mathematics 1 Theoretical Computer Science 1 Insurance Mathematics & Economics 1 Applied Mathematics and Mechanics. (English Edition) 1 Physica D 1 Applied Numerical Mathematics 1 Acta Mathematicae Applicatae Sinica. English Series 1 Journal of Complexity 1 Formal Aspects of Computing 1 Sugaku Expositions 1 Neural Computation 1 Discrete Event Dynamic Systems 1 Computational Statistics 1 Communications in Statistics. Simulation and Computation ...and 49 more Serials
all top 5
### Cited in 33 Fields
253 Biology and other natural sciences (92-XX) 143 Ordinary differential equations (34-XX) 79 Partial differential equations (35-XX) 62 Dynamical systems and ergodic theory (37-XX) 59 Probability theory and stochastic processes (60-XX) 55 Statistics (62-XX) 31 Numerical analysis (65-XX) 26 Systems theory; control (93-XX) 25 Combinatorics (05-XX) 15 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 9 Difference and functional equations (39-XX) 7 Quantum theory (81-XX) 6 Operator theory (47-XX) 6 Computer science (68-XX) 6 Information and communication theory, circuits (94-XX) 5 Linear and multilinear algebra; matrix theory (15-XX) 4 Algebraic geometry (14-XX) 4 Real functions (26-XX) 4 Operations research, mathematical programming (90-XX) 3 Order, lattices, ordered algebraic structures (06-XX) 3 Integral equations (45-XX) 3 Calculus of variations and optimal control; optimization (49-XX) 2 Nonassociative rings and algebras (17-XX) 2 Approximations and expansions (41-XX) 2 Functional analysis (46-XX) 2 General topology (54-XX) 2 Geophysics (86-XX) 1 Field theory and polynomials (12-XX) 1 $$K$$-theory (19-XX) 1 Measure and integration (28-XX) 1 Mechanics of particles and systems (70-XX) 1 Fluid mechanics (76-XX) 1 Statistical mechanics, structure of matter (82-XX) | 2022-05-18 19:49:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.742712140083313, "perplexity": 13617.51144679512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, 
"end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522309.14/warc/CC-MAIN-20220518183254-20220518213254-00096.warc.gz"} |
http://zjhomeworkmiqc.artsales.biz/thesis-preamble-latex.html | # Thesis preamble latex
Latex/document structure scientific reports (bachelor report, master thesis, dissertation) letters {} and \begin {document} is called the preamble. Bham-latex-preamble this project holds the latex preamble file of my phd thesis, for submission at the university of birmingham it is a work in progress. Dissertation theory phd thesis latex preamble how much do essay writers get paid thesis online writers. The next step to prepare the thesis template is writing some additional command in latex preamble, accessible from the menu document- settings, under latex preamble. Carlosefonseca / thesis code issues 1 thesis_preambletex % % tex master: thesistex the last step is to invoke latex on the tex file once.
Literary analysis essay catcher rye master thesis preamble cover letters day of the weekmaster thesis preamble master thesis preamble good latex preambles for. Directory macros/latex/contrib/psu-thesis which has since been supplanted by the committee page the \includesignature command in the preamble produces the. Honors thesis wsu if icry for latex thesis preamble you when embarking on a par with readers green, a according to the level and standard.
Do my essays for me master thesis preamble how to write a high school resume for college application essays paper. Dissertation only distance phd phd thesis preamble phd thesis on organizational development doctoral dissertation abstract.
Good latex preambles for math thesis in general, i suggest to start with a preamble that is as small as possible and to add packages and/or custom commands as the need for them arises with regard to your code sample: don't fiddle with \topmargin, \textheight and similar commands. This project holds the latex preamble file of my phd thesis, for submission at the university of birmingham it is a work in progresscarlosefonseca / thesis.
Now we can finish off the preamble by filling in the title, author and date information to create the simplest title page we can add the thesis title, institution. The next step to prepare the thesis template is writing some additional command in latex preamble, accessible from the menuphd thesis with multiple preambles. How to write a thesis in latex pt 1 - basic structure how to write a basic thesis using latex making up the main body of the thesis ##the preamble in.
Thesis preamble latex
Rated 3/5 based on 22 review | 2018-10-23 05:34:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587243795394897, "perplexity": 4648.37981197293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516071.83/warc/CC-MAIN-20181023044407-20181023065907-00553.warc.gz"} |
http://stats.stackexchange.com/questions/2492/is-normality-testing-essentially-useless | # Is normality testing 'essentially useless'?
A former colleague once argued to me as follows:
"we usually apply normality tests to the results of processes that, under the null, generate random variables that are only asymptotically or nearly normal (with the 'asymptotically' part dependent on some quantity which we cannot make large); In the era of cheap memory, big data, and fast processors, normality tests should always reject the null of normal distribution for large (though not insanely large) samples. And so, perversely, normality tests should only be used for small samples, when they presumably have lower power and less control over type I rate."
Is this a valid argument? Is this a well-known argument? Are there well known tests for a 'fuzzier' null hypothesis than normality?
-
For reference: I don't think that this needed to be community wiki. – Shane Sep 8 '10 at 17:57
I wasn't sure there was a 'right answer'... – shabbychef Sep 8 '10 at 18:01
– Shane Sep 8 '10 at 18:03
In a certain sense, this is true of all tests of a finite number of parameters. With $k$ fixed (the number of parameters on which the test is carried out) and $n$ growing without bound, any difference between the two groups (no matter how small) will always break the null at some point. Actually, this is an argument in favor of Bayesian tests. – user603 Sep 8 '10 at 18:07
The thread at "Are large datasets inappropriate for hypothesis testing" discusses a generalization of this question. (stats.stackexchange.com/questions/2516/… ) – whuber Sep 9 '10 at 20:17
It's not an argument. It is a (a bit strongly stated) fact that formal normality tests always reject on the huge sample sizes we work with today. It's even easy to prove that when n gets large, even the smallest deviation from perfect normality will lead to a significant result. And as every dataset has some degree of randomness...
Let me illustrate with the Shapiro-Wilk test. If you construct an almost-normal distribution and do a small simulation, you get more or less the following results in R:
x <- replicate(100, { # generates 100 different tests on each distribution
  c(shapiro.test(rnorm(10) + c(1, 0, 2, 0, 1))$p.value,
    shapiro.test(rnorm(100) + c(1, 0, 2, 0, 1))$p.value,
    shapiro.test(rnorm(1000) + c(1, 0, 2, 0, 1))$p.value,
    shapiro.test(rnorm(5000) + c(1, 0, 2, 0, 1))$p.value)
}) # rnorm gives a random draw from the normal distribution
rownames(x) <- c("n10", "n100", "n1000", "n5000")
rowMeans(x < 0.05) # the proportion of significant deviations
n10 n100 n1000 n5000
0.04 0.04 0.20 0.87
So in 87% of the cases, the last distribution is no longer seen as a normal distribution. Yet, if you look at the QQ plots, you would never decide on a deviation from normality. Below you see as an example the QQ-plots for one set of random samples
with p-values
n10 n100 n1000 n5000
0.760 0.681 0.164 0.007
-
On a side note, the central limit theorem makes the formal normality check unnecessary in many cases when n is large. – Joris Meys Sep 8 '10 at 23:19
yes, the real question is not whether the data are actually distributed normally but are they sufficiently normal for the underlying assumption of normality to be reasonable for the practical purpose of the analysis, and I would have thought the CLT based argument is normally [sic] sufficient for that. – Dikran Marsupial Sep 9 '10 at 9:37
@joris-meys the central limit theorem does not help unless the population standard deviation is known. Very tiny disturbances in the random variable can distort the sample variance and make the distribution of a test statistic very far from the $t$ distribution, as shown by Rand Wilcox. – Frank Harrell Aug 1 '13 at 11:42
This answer appears not to address the question: it merely demonstrates that the S-W test does not achieve its nominal confidence level, and so it identifies a flaw in that test (or at least in the R implementation of it). But that's all--it has no bearing on the scope of usefulness of normality testing in general. The initial assertion that normality tests always reject on large sample sizes is simply incorrect. – whuber Oct 24 '13 at 21:16
Not one real life distribution is perfectly normal. So with large enough samples, all normality test should reject the null. So yes, SW does what it needs to do. But it is worthless for applied statistics. There's no point in going to eg a Wilcoxon when having a sample size of 5000 and an almost normal distribution. And that's what OP's remark was all about: does it make sense to test for normality when having large sample sizes? Answer: no. Why? because you detect (correctly) a deviation that doesn't matter for your analysis. As pointed out by the QQ plots – Joris Meys Oct 25 '13 at 16:03
When thinking about whether normality testing is 'essentially useless', one first has to think about what it is supposed to be useful for. Many people (well... at least, many scientists) misunderstand the question the normality test answers.
The question normality tests answer: Is there convincing evidence of any deviation from the Gaussian ideal? With moderately large real data sets, the answer is almost always yes.
The question scientists often expect the normality test to answer: Do the data deviate enough from the Gaussian ideal to "forbid" use of a test that assumes a Gaussian distribution? Scientists often want the normality test to be the referee that decides when to abandon conventional (ANOVA, etc.) tests and instead analyze transformed data or use a rank-based nonparametric test or a resampling or bootstrap approach. For this purpose, normality tests are not very useful.
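To make the contrast concrete, here is a small sketch in Python with SciPy (the choice of a t distribution with 20 degrees of freedom is mine, purely for illustration): a mildly non-normal sample of 100,000 points fails D'Agostino's normality test decisively, even though the deviation is of no consequence for, say, a confidence interval for the mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A t distribution with 20 df is visually almost indistinguishable from a
# normal, but its excess kurtosis (6 / (df - 4) = 0.375) is not zero.
x = rng.standard_t(df=20, size=100_000)

# Question 1: "Is there ANY deviation from normality?" -- yes, decisively.
_, p_normal = stats.normaltest(x)

# Question 2: "Does it matter for inference about the mean?" -- hardly; the
# 95% t-interval is still extremely tight around the sample mean.
ci = stats.t.interval(0.95, df=len(x) - 1, loc=x.mean(), scale=stats.sem(x))

print(p_normal)  # tiny: normality is (correctly) rejected
print(ci)        # a tight interval, roughly +/- 0.007 around the sample mean
```

The first question gets a resounding "yes"; the second, which is the one the scientist actually cares about, gets a "no".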
-
+1 for a good and informative answer. I find it useful to see a good explanation for a common misunderstanding (which I have incidentally been experiencing myself: stats.stackexchange.com/questions/7022/…). What I miss though, is an alternative solution to this common misunderstanding. I mean, if normality tests are the wrong way to go, how does one go about checking if a normal approximation is acceptable/justified? – posdef Feb 10 '11 at 12:45
There's is not substitute for the (common) sense of the analyst (or, well, the researcher/scientist). And experience (learnt by trying and seeing: what conclusions do I get if I assume it is normal? What are the difference if not?). Graphics are your best friends. – FairMiles Apr 5 '13 at 15:33
I like this paper, which makes the point you made: Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105(1), 156-166. – Jeremy Miles Aug 20 at 20:18
I think that tests for normality can be useful as companions to graphical examinations. They have to be used in the right way, though. In my opinion, this means that many popular tests, such as the Shapiro-Wilk, Anderson-Darling and Jarque-Bera tests never should be used.
Before I explain my standpoint, let me make a few remarks:
• In an interesting recent paper Rochon et al. studied the impact of the Shapiro-Wilk test on the two-sample t-test. The two-step procedure of testing for normality before carrying out, for instance, a t-test is not without problems. Then again, neither is the two-step procedure of graphically investigating normality before carrying out a t-test. The difference is that the impact of the latter is much more difficult to investigate (as it would require a statistician to graphically investigate normality $100,000$ or so times...).
• It is useful to quantify non-normality, for instance by computing the sample skewness, even if you don't want to perform a formal test.
• Multivariate normality can be difficult to assess graphically and convergence to asymptotic distributions can be slow for multivariate statistics. Tests for normality are therefore more useful in a multivariate setting.
• Tests for normality are perhaps especially useful for practitioners who use statistics as a set of black-box methods. When normality is rejected, the practitioner should be alarmed and, rather than carrying out a standard procedure based on the assumption of normality, consider using a nonparametric procedure, applying a transformation or consulting a more experienced statistician.
• As has been pointed out by others, if $n$ is large enough, the CLT usually saves the day. However, what is "large enough" differs for different classes of distributions.
(In my definition) a test for normality is directed against a class of alternatives if it is sensitive to alternatives from that class, but not sensitive to alternatives from other classes. Typical examples are tests that are directed towards skew or kurtotic alternatives. The simplest examples use the sample skewness and kurtosis as test statistics.
Directed tests of normality are arguably often preferable to omnibus tests (such as the Shapiro-Wilk and Jarque-Bera tests) since it is common that only some types of non-normality are of concern for a particular inferential procedure.
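Off-the-shelf versions of such directed tests exist; as a sketch (in Python with SciPy, not taken from this answer), `scipy.stats.skewtest` and `scipy.stats.kurtosistest` use the sample skewness and kurtosis exactly as described:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

skewed = rng.exponential(scale=1.0, size=500)  # skewness 2, modest tails
heavy = rng.standard_t(df=5, size=500)         # symmetric, heavy tails

# Each directed test is sensitive mainly to its own class of alternatives.
_, p_skew_on_skewed = stats.skewtest(skewed)   # skew test on skewed data
_, p_kurt_on_heavy = stats.kurtosistest(heavy) # kurtosis test on heavy tails
_, p_skew_on_heavy = stats.skewtest(heavy)     # skew test on symmetric data

print(p_skew_on_skewed, p_kurt_on_heavy, p_skew_on_heavy)
```

The first two p-values are essentially zero, while the skew test applied to the symmetric heavy-tailed sample has no particular reason to reject.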
Let's consider Student's t-test as an example. Assume that we have an i.i.d. sample from a distribution with skewness $\gamma=\frac{E(X-\mu)^3}{\sigma^3}$ and (excess) kurtosis $\kappa=\frac{E(X-\mu)^4}{\sigma^4}-3.$ If $X$ is symmetric about its mean, $\gamma=0$. Both $\gamma$ and $\kappa$ are 0 for the normal distribution.
Under regularity assumptions, we obtain the following asymptotic expansion for the cdf of the test statistic $T_n$: $$P(T_n\leq x)=\Phi(x)+n^{-1/2}\frac{1}{6}\gamma(2x^2+1)\phi(x)-n^{-1}x\Big(\frac{1}{12}\kappa (x^2-3)-\frac{1}{18}\gamma^2(x^4+2x^2-3)-\frac{1}{4}(x^2+3)\Big)\phi(x)+o(n^{-1}),$$
where $\Phi(\cdot)$ is the cdf and $\phi(\cdot)$ is the pdf of the standard normal distribution.
$\gamma$ appears for the first time in the $n^{-1/2}$ term, whereas $\kappa$ appears in the $n^{-1}$ term. The asymptotic performance of $T_n$ is much more sensitive to deviations from normality in the form of skewness than in the form of kurtosis.
It can be verified using simulations that this is true for small $n$ as well. Thus Student's t-test is sensitive to skewness but relatively robust against heavy tails, and it is reasonable to use a test for normality that is directed towards skew alternatives before applying the t-test.
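As a sketch of such a simulation (in Python rather than R; the centred lognormal and the t distribution with 5 degrees of freedom are my own illustrative choices, not from the answer):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, reps, alpha = 20, 5000, 0.05

def rejection_rate(draw):
    """Empirical size of the two-sided one-sample t-test of H0: mu = 0."""
    hits = 0
    for _ in range(reps):
        hits += stats.ttest_1samp(draw(), popmean=0.0).pvalue < alpha
    return hits / reps

# Skewed null: a centred lognormal (true mean 0, very large skewness).
lognorm_mean = np.exp(0.5)  # E[lognormal(0, 1)] = exp(sigma^2 / 2)
rate_skewed = rejection_rate(lambda: rng.lognormal(0.0, 1.0, n) - lognorm_mean)

# Heavy-tailed but symmetric null: Student t with 5 df (true mean 0).
rate_heavy = rejection_rate(lambda: rng.standard_t(5, n))

print(rate_skewed, rate_heavy)  # skewness inflates the size; heavy tails barely do
```

Even at $n=20$, the nominal 5% test is noticeably oversized under the skewed null and close to nominal under the symmetric heavy-tailed one.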
As a rule of thumb (not a law of nature), inference about means is sensitive to skewness and inference about variances is sensitive to kurtosis.
Using a directed test for normality has the benefit of getting higher power against ''dangerous'' alternatives and lower power against alternatives that are less ''dangerous'', meaning that we are less likely to reject normality because of deviations from normality that won't affect the performance of our inferential procedure. The non-normality is quantified in a way that is relevant to the problem at hand. This is not always easy to do graphically.
As $n$ gets larger, skewness and kurtosis become less important - and directed tests are likely to detect if these quantities deviate from 0 even by a small amount. In such cases, it seems reasonable to, for instance, test whether $|\gamma|\leq 1$ or (looking at the first term of the expansion above) $$|n^{-1/2}\frac{1}{6}\gamma(2z_{\alpha/2}^2+1)\phi(z_{\alpha/2})|\leq 0.01$$ rather than whether $\gamma=0$. This takes care of some of the problems that we otherwise face as $n$ gets larger.
-
Now this is a great answer! – user603 Apr 4 at 10:45
Yea this should be the accepted, really fantastic answer – noah Apr 14 at 19:24
IMHO normality tests are absolutely useless for the following reasons:
1. On small samples, there's a good chance that the true distribution of the population is substantially non-normal, but the normality test isn't powerful to pick it up.
2. On large samples, things like the t-test and ANOVA are pretty robust to non-normality.
3. The whole idea of a normally distributed population is just a convenient mathematical approximation anyhow. None of the quantities typically dealt with statistically could plausibly have distributions with a support of all real numbers. For example, people can't have a negative height. Something can't have negative mass or more mass than there is in the universe. Therefore, it's safe to say that nothing is exactly normally distributed in the real world.
-
Electrical potential difference is an example of a real-world quantity that can be negative. – nico Sep 19 '10 at 13:03
@nico: Sure it can be negative, but there's some finite limit to it because there are only so many protons and electrons in the Universe. Of course this is irrelevant in practice, but that's my point. Nothing is exactly normally distributed (the model is wrong), but there are lots of things that are close enough (the model is useful). Basically, you already knew the model was wrong, and rejecting or not rejecting the null gives essentially no information about whether it's nonetheless useful. – dsimcha Sep 22 '10 at 19:39
@dsimcha - I find that a really insightful, useful response. – rolando2 May 4 '12 at 21:34
@dsimcha, the $t$-test and ANOVA are not robust to non-normality. See papers by Rand Wilcox. – Frank Harrell Aug 1 '13 at 11:45
I think that pre-testing for normality (which includes informal assessments using graphics) misses the point.
1. Users of this approach assume that the normality assessment has in effect a power near 1.0.
2. Nonparametric tests such as the Wilcoxon, Spearman, and Kruskal-Wallis have efficiency of 0.95 if normality holds.
3. In view of 2. one can pre-specify the use of a nonparametric test if one even entertains the possibility that the data may not arise from a normal distribution.
4. Ordinal cumulative probability models (the proportional odds model being a member of this class) generalize standard nonparametric tests. Ordinal models are completely transformation-invariant with respect to $Y$, are robust, powerful, and allow estimation of quantiles and mean of $Y$.
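Point 2 can be illustrated numerically (a Python/SciPy sketch; the sample size, shift, and replication count are arbitrary illustrative choices, not from the answer): under a normal shift alternative, the Wilcoxon signed-rank test loses very little power relative to the t-test, consistent with its asymptotic relative efficiency of $3/\pi \approx 0.955$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps, shift = 30, 2000, 0.5  # arbitrary illustrative choices

t_hits = w_hits = 0
for _ in range(reps):
    x = rng.normal(loc=shift, scale=1.0, size=n)  # normal data, true mean 0.5
    t_hits += stats.ttest_1samp(x, popmean=0.0).pvalue < 0.05
    w_hits += stats.wilcoxon(x).pvalue < 0.05     # signed-rank test of location 0

power_t, power_w = t_hits / reps, w_hits / reps
print(power_t, power_w)  # nearly identical power, even on exactly normal data
```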
-
Let me add one small thing:
Performing a normality test without taking its alpha error into account increases your overall probability of committing an alpha error.
You should never forget that each additional test does this, as long as you don't control for alpha-error accumulation. Hence, another good reason to dismiss normality testing.
-
I presume you are referring to a situation where one first does a normality test, and then uses the result of that test to decide which test to perform next. – Harvey Motulsky Sep 9 '10 at 16:07
I refer to the general utility of normality tests when used as method to determine whether or not it is appropriate to use a certain method. If you apply them in these cases, it is, in terms of probability of committing an alpha error, better to perform a more robust test to avoid the alpha error accumulation. – Henrik Sep 10 '10 at 10:42
This does not make sense to me. Even if you decide between, say, an ANOVA or a rank-based method based on a test of normality (a bad idea of course), at the end of the day you would still only perform one test of the comparison of interest. If you reject normality erroneously, you still haven't reached a wrong conclusion regarding this particular comparison. You might be performing two tests but the only case in which you can conclude that factor such-and-such have an effect is when the second test also rejects $H_0$, not when only the first one does. Hence, no alpha-error accumulation… – Gala Jun 8 '13 at 11:24
Another way a normality test could increase type I errors is if we're talking about "overall probability of performing an alpha-error." The test itself has an error rate, so overall, our probability of committing an error increases. Emphasis on one small thing too I suppose... – Nick Stauner Nov 8 '13 at 15:49
@NickStauner That is exactly what I wanted to convey. Thanks for making this point even clearer. – Henrik Nov 9 '13 at 12:25
Before asking whether a test or any sort of rough check for normality is "useful" you have to answer the question behind the question: "Why are you asking?"
For example, if you only want to put a confidence limit around the mean of a set of data, departures from normality may or may not be important, depending on how much data you have and how big the departures are. However, departures from normality are apt to be crucial if you want to predict what the most extreme value will be in future observations or in the population you have sampled from.
-
The argument you gave is an opinion. I think that the importance of normality testing is to make sure that the data do not depart severely from the normal. I use it sometimes to decide between using a parametric versus a nonparametric test for my inference procedure. I think the test can be useful in moderate and large samples (when the central limit theorem does not come into play). I tend to use the Shapiro-Wilk or Anderson-Darling tests, but running SAS I get them all and they generally agree pretty well.

On a different note, I think that graphical procedures such as Q-Q plots work equally well. The advantage of a formal test is that it is objective. In small samples it is true that these goodness-of-fit tests have practically no power, and that makes intuitive sense, because a small sample from a normal distribution might by chance look rather non-normal, and that is accounted for in the test. Also, the high skewness and kurtosis that distinguish many non-normal distributions from the normal distribution are not easily seen in small samples.
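One way to attach a number to the straightness of a Q-Q plot (a Python sketch with SciPy; this is an illustration, not part of the answer above) is the correlation coefficient that `scipy.stats.probplot` reports for its fitted line:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

normal_sample = rng.normal(size=200)
skewed_sample = rng.exponential(size=200)

# probplot orders the sample against normal quantiles and fits a line;
# r is the correlation of those points, i.e. how straight the Q-Q plot is.
(_, _), (_, _, r_normal) = stats.probplot(normal_sample, dist="norm")
(_, _), (_, _, r_skewed) = stats.probplot(skewed_sample, dist="norm")

print(r_normal, r_skewed)  # the skewed sample gives a visibly lower r
```

This gives the graphical check some of the objectivity of a formal test while keeping the plot itself as the primary diagnostic.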
-
While it certainly can be used that way, I don't think you will be more objective than with a QQ-plot. The subjective part with the tests is when to decide that your data is too non-normal. With a large sample rejecting at p=0.05 might very well be excessive. – Erik May 4 '12 at 17:56
Pre-testing (as suggested here) can invalidate the Type I error rate of the overall process; one should take into account the fact that a pre-test was done when interpreting the results of whichever test it selected. More generally, hypothesis tests should be kept for testing null hypothesis one actually cares about, i.e. that there is no association between variables. The null hypothesis that the data is exactly Normal doesn't fall into this category. – guest May 4 '12 at 18:02
(+1) There is excellent advice here. Erik, the use of "objective" took me aback too, until I realized Michael's right: two people correctly conducting the same test on the same data will always get the same p-value, but they might interpret the same Q-Q plot differently. Guest: thank you for the cautionary note about Type I error. But why should we not care about the data distribution? Frequently that is interesting and valuable information. I at least want to know whether the data are consistent with the assumptions my tests are making about them! – whuber May 4 '12 at 18:25
I strongly disagree. Both people get the same QQ-plot and the same p-value. To interpret the p-value you need to take into account the sample size and the violations of normality your test is particularly sensitive to. So deciding what to do with your p-value is just as subjective. The reason you might prefer the p-value is that you believe the data could follow a perfect normal distribution - else it is just a question of how quickly the p-value falls with sample size. What is more, given a decent sample size the QQ-plot looks pretty much the same and remains stable with more samples. – Erik May 4 '12 at 20:30
Erik, I agree that test results and graphics require interpretation. But the test result is a number and there won't be any dispute about it. The QQ plot, however, admits of multiple descriptions. Although each may objectively be correct, the choice of what to pay attention to is...a choice. That's what "subjective" means: the result depends on the analyst, not just the procedure itself. This is why, for instance, in settings as varied as control charts and government regulations where "objectivity" is important, criteria are based on numerical tests and never graphical results. – whuber May 4 '12 at 21:54
I think a maximum entropy approach could be useful here. We can assign a normal distribution because we believe the data is "normally distributed" (whatever that means) or because we only expect to see deviations of about the same Magnitude. Also, because the normal distribution has just two sufficient statistics, it is insensitive to changes in the data which do not alter these quantities. So in a sense you can think of a normal distribution as an "average" over all possible distributions with the same first and second moments. this provides one reason why least squares should work as well as it does.
-
I think the first 2 questions have been thoroughly answered but I don't think question 3 was addressed. Many tests compare the empirical distribution to a known hypothesized distribution. The critical value for the Kolmogorov-Smirnov test is based on F being completely specified. It can be modified to test against a parametric distribution with parameters estimated. So if fuzzier means estimating more than two parameters then the answer to the question is yes. These tests can be applied to 3-parameter families or more. Some tests are designed to have better power when testing against a specific family of distributions. For example when testing normality the Anderson-Darling or the Shapiro-Wilk test have greater power than K-S or chi square when the null hypothesized distribution is normal. Lilliefors devised a test that is preferred for exponential distributions.
-
Tests where "something" important to the analysis is supported by high p-values are, I think, wrongheaded. As others pointed out, for large data sets, a p-value below 0.05 is assured. So, the test essentially "rewards" small and fuzzy data sets and "rewards" a lack of evidence. Something like QQ plots is much more useful. The desire for hard numbers to decide things like this (yes/no, normal/not normal) misses that modeling is partially an art and how hypotheses are actually supported.
-
Samples are generally small in the sense of big populations that may be frequently dispersed. Moreover, we must ensure a correct identification of the target population. Even a small sample is good if we are getting data through properly conducted controlled experiments. – subhash c. davar Dec 14 '13 at 3:55
It remains that a large sample that is nearly normal will have a low p-value while a smaller sample that is not nearly as normal will often not. I do not think that large p-values are useful. Again, they reward for a lack of evidence. I can have a sample with several million data points, and it will nearly always reject the normality assumption under these tests while a smaller sample will not. Therefore, I find them not useful. If my thinking is flawed please show it using some deductive reasoning on this point. – wvguy8258 Jul 9 at 7:43
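The sample-size effect debated throughout this thread is easy to demonstrate directly. Below is a small self-contained sketch (my own illustration, not from the thread; it uses a hand-rolled Jarque-Bera statistic rather than Shapiro-Wilk or Anderson-Darling): the same mild skewness that a sample of 50 typically cannot detect is rejected overwhelmingly once the sample is large.

```python
import random

def jarque_bera(xs):
    """Jarque-Bera normality statistic: n/6 * (skew^2 + (kurt - 3)^2 / 4).
    Under normality it is asymptotically chi-squared with 2 df, so
    values above ~5.99 reject at the 5% level."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

def mildly_skewed_sample(n, rng):
    # z + 0.1*(z^2 - 1): mean zero, slightly skewed -- "nearly normal"
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        out.append(z + 0.1 * (z * z - 1.0))
    return out

rng = random.Random(0)
jb_small = jarque_bera(mildly_skewed_sample(50, rng))
jb_large = jarque_bera(mildly_skewed_sample(50_000, rng))
print(jb_small, jb_large)
```

With the large sample the statistic lands in the thousands (certain rejection), while the small sample usually sits below the 5% critical value of about 5.99 — exactly the behavior the commenters describe.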
One good use of normality test that I don't think has been mentioned is to determine whether using z-scores is okay. Let's say you selected a random sample from a population, and you wish to find the probability of selecting one random individual from the population and get a value of 80 or higher. This can be done only if the distribution is normal, because to use z-scores, the assumption is that the population distribution is normal.
But then I guess I can see this being arguable too...
-
Value of what? Mean, sum, variance, an individual observation? Only the last one relies on the assumed normality of the distribution. – whuber Sep 29 '13 at 16:12
i meant individual – Hotaka Sep 29 '13 at 16:29
Thanks. Your answer remains so vague, though, that it is difficult to tell what procedures you are referring to and impossible to assess whether your conclusions are valid. – whuber Sep 29 '13 at 16:33
The problem with this use is the same as with other uses: The test will be dependent on sample size, so, it's essentially useless. It doesn't tell you whether you can use z scores. – Peter Flom May 31 at 0:24 | 2014-10-21 00:32:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7317670583724976, "perplexity": 729.9065958976608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443598.37/warc/CC-MAIN-20141017005723-00368-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://physics.stackexchange.com/tags/work/new | # Tag Info
1
This is a question everyone asks at first because it intuitively seems like a contradiction. However, it is not. Conceptual Examples I think you are not far off but perhaps the third law is the one tripping you up, not the 1st... But anyway, here are some conceptual examples, which might help... Example 1. Consider the particle in the frame for a ...
21
In introductory problems about work you're normally taught that it's force times distance: $$W = F \times x$$ and you treat the force as constant. If you look at the problem this way then you're quite correct that if the force is $F = mg$ then the box can't accelerate so it can't move. However a more complete way to define the work is: $$W = \int F \, dx$$
3
Newton's first law states that: "An object at rest will remain at rest unless acted on by an unbalanced force. An object in motion continues in motion with the same speed and in the same direction unless acted upon by an unbalanced force." It is often called the law of inertia. So if you want to move an object with zero velocity, at first moment you have ...
1
The external force acting on the box by us should be a little more than the weight, otherwise indeed: no acceleration! So the $mgh$ is really a lower limit. We need to accelerate. And, by the time we reach $h$, decelerate -- so we get this extra little bit of work back. But only in a physics sense.
1
Work is described by the following formula: $$W=\vec{F} \cdot \vec{s} = F\cdot s \cdot \cos \alpha$$ with $s$ the distance, 40 m in this case, and $\alpha$ the angle between $\vec{F}$ and $\vec{s}$, which in this case happens to be zero, giving $W = F \cdot s$. $F$ is force; in this case work is done against gravitational force, so $$F=mg$$ $m$ is the ...
-1
Since the work done must be $W=F\cdot d$, we need to calculate just the distance, because we already have the force: $$x(t)=x_0+v_0t+\frac{1}{2}at^2=\frac{1}{2}at^2$$ Finally we replace that in the first equation to find the work done: $$W=F\cdot\frac{1}{2}at^2$$
0
The friction acts only when the contact point slips relative to the ground. You can consider the speed of the lowest point to be the sum of $v$ and $\omega r$ with proper directions. Friction acts till there is slipping, and the condition for no slipping is $v=\omega r$ where $v$ is to the right and $\omega$ is clockwise according to the diagram. As in the first case, the lowest point is always ...
0
1st question: as the point in contact of the disk and the plane is not moving, no work is done. And as for the second question: no body is perfectly rigid, therefore due to deformation there is a loss in energy. That is why the disk stops.
0
Power is, in words, the rate at which work is done, and work done equals the amount of energy converted from one form to another. For electric circuits, the power associated with a circuit element is the product of the voltage across and the current through. We can verify this via dimensional analysis: $$v \cdot i = \frac{J}{C}\cdot \frac{C}{s} = \frac{J}{s} = \mathrm{W}$$
0
Starting from the definition of power $$P=\frac{dW}{dt}$$ We can solve for the work (after integrating). The thing to know is what power is in terms of current and voltage or resistance $$P=IV=I^2R=V^2/R$$ Clearly, $P=IV$ is what we want to use. Last thing is what is $V$ for an inductor? It is $V=L\frac{dI}{dt}$. From here I will let you (and future ...
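Carrying the answer's setup one step further (the final step is not in the original, so treat this as an illustration): for any current ramp from 0 to I, integrating P = IV = LI(dI/dt) over time gives W = ½LI², the energy stored in the inductor. A quick numerical check with an assumed linear current ramp:

```python
# Numerically integrate P(t) = i(t) * v(t) with v = L * di/dt for a
# linear current ramp i(t) = I_max * t / T, and compare with L*I^2/2.
L = 0.25      # inductance in henries (arbitrary test value)
I_max = 2.0   # final current in amperes
T = 1.0       # ramp duration in seconds
N = 100_000   # integration steps

dt = T / N
work = 0.0
for k in range(N):
    t = (k + 0.5) * dt   # midpoint rule
    i = I_max * t / T    # current at time t
    di_dt = I_max / T    # constant slope for a linear ramp
    v = L * di_dt        # inductor voltage v = L di/dt
    work += i * v * dt   # accumulate P dt = i v dt

print(work)  # matches 0.5 * L * I_max**2
```

The ramp shape and duration drop out of the result, which is why the stored energy depends only on the final current.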
1
If this were a mechanical pump, then the work done per unit volume pumped would be: $$W = VP$$ where $V$ is the volume pumped and $P$ is the pressure increase across the pump. According to Wikipedia the volume pumped per heartbeat (at 72 bpm - does it change with pulse rate?) is about $70$ cubic centimetres per beat, which is $7 \times 10^{-5}$ m$^3$. ...
0
Let me attempt to clarify in a direction that I feel comfortable with. From usual mechanics, you know the definition of work is given by: $$W = \int \mathbf{F}\cdot d\mathbf{s}$$ for simplicity considering the work done to be in the same direction as displacement $$W = \int {F}\,{ds}$$ Now this is just one version of defining work. In generalising, we redefine ...
0
It may just be the wording of your text that provides the confusion. Work is the same in thermodynamics as it is in mechanics. Work is an energy transfer process. In thermodynamics this often equates to energy in the forms of pressure, temperature, volume, etc. that is transformed into energy associated with position or movement (i.e. kinetic and potential ...
2
You may think you are not moving when you plank, but your body maintains plank by pushing you back up imperceptibly after you droop imperceptibly. Your muscles do work against gravity. I can't imagine how to estimate how much.
2
You have the right idea, but the question asks for situations where there is a force and no work. Centripetal force does do no work but there is a force, so I is true. In III you are exactly right, but it says there is a force and no work, which falls under the question. I think you have misunderstood the question. II is false because a force in the ...
0
You have the ideas down, I think, but for some reason, you seem to have not yet noticed the disagreement between your thoughts. For instance, if you've already stated that $W = fd$, and that you're looking for examples of measurable force with no work, what would that tell you about $f*d$? So if work is $f*d$, then in order to have work, you must have some ...
5
Your teacher's explanation is incorrect. A simple counterexample can be constructed to illustrate this by considering what happens when the role of your arm is replaced by that of a rubber band. When a weight is suspended from the ceiling by a rubber band, the band stretches and its polymer chains become more ordered, in exact analogy to your teacher's ...
5
Think about the work-kinetic energy theorem, which states that the net work done on an object is equal to its change in kinetic energy: $$W_{net}=\Delta\mathrm{KE}.$$ You are right that when lifting an object of mass $m$ by a height $h,$ in a uniform gravitational field, the work you do is $W_{you}=mgh$ (assuming, as you said, that you're applying a force ...
3
The electrostatic potential energy of a system of point charges is defined as the work required to be done to bring the charges constituting the system to their respective locations from infinity. Suppose we have a configuration of point charges. If the potential energy of the system is negative, this means work is positive. Consider two point ...
1
With work you also have to take care in specifying what is doing the work. If it is the work required to move a particle or the work done by the electric field, these have different signs. If the work done by the electric field on a point charge is positive it means it is moving in the direction of the electric force acting on the point charge, therefore, $W$ ...
2
Rule of thumb for working it out: If you imagine letting a charge go, the direction it tends to move is toward lower potential energy. The opposite direction is toward higher potential energy. This is independent of the choice of where the zero of energy is.
1
Have to be a bit careful with potential energies, as the 0 point of potential can be arbitrarily chosen. Only changes in potential are well defined without a choice of 0 point. That said, it is often convenient to choose the 0 point at $\infty$, and this is the typical choice when talking about assembling point charges. With this choice, the potential ...
0
After some thinking, I got the answer. When you decrease the ambient pressure from $p_i$ to $p_{atm}$, the system goes out of equilibrium, and so the expansion of the gas is not quasi-static. So, all the state variables like pressure and volume aren't defined. So we can't use them for calculating the work done in definition 2.
4
Good question, and the answer is that $W' > W$ if $F > mg$. The reason for this is that if $F = mg$ then the net force is zero so the particle travels at a constant velocity. That means its kinetic energy hasn't changed, so the only change is the potential energy. However if $F > mg$ then the net force is positive and the particle is accelerating ...
Top 50 recent answers are included | 2014-03-12 18:13:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8078275322914124, "perplexity": 249.88758307084146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394023864525/warc/CC-MAIN-20140305125104-00039-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://www.maths.usyd.edu.au/s/scnitm/seangard-SUMS-White-CounterfeitCoi | SMS scnews item created by Sean Gardiner at Tue 24 Mar 2015 1726
Type: Seminar
Distribution: World
Expiry: 27 Mar 2015
Calendar1: 26 Mar 2015 1300-1400
CalLoc1: New Law 104
Auth: seangard@cpe-58-173-10-43.ecxf2.cht.bigpond.net.au (sgar9702) in SMS-WASM
# SUMS: White -- Counterfeit Coin Conundrums
This week’s SUMS talk is being given by PhD graduand Gareth White.
Abstract:
A classic problem states: Given twelve coins, with 11 identical coins and one
counterfeit coin with a different weight (otherwise identical), as well as a balance
scale, how can you identify the counterfeit coin using only 3 weighings? I will be
discussing this problem, as well as generalisations (such as determining the maximum
number of coins that will allow us to identify a counterfeit coin in k weighings, etc.),
and discuss how this relates to things such as Hamming Codes and Information Theory.
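The counting argument behind the generalisation mentioned in the abstract can be sketched in a few lines of Python (my own illustration, not part of the talk): k weighings of a balance have 3^k possible outcome sequences, while n coins with an unknown heavier-or-lighter counterfeit present 2n cases to distinguish; the commonly quoted answer to the generalised puzzle is (3^k − 3)/2 coins.

```python
def max_coins(k: int) -> int:
    """Largest n for which a counterfeit among n coins can be found,
    and identified as heavier or lighter, in k weighings of a balance
    scale -- the commonly quoted bound (3^k - 3) / 2."""
    return (3 ** k - 3) // 2

# Sanity check against the information-theoretic bound: the 3^k outcome
# sequences must cover the 2n "coin x heavy / coin x light" cases.
for k in range(2, 6):
    n = max_coins(k)
    assert 3 ** k >= 2 * n
    print(k, n)  # k = 3 recovers the classic twelve-coin puzzle
```

For k = 3 this gives 12 coins, matching the classic problem in the abstract.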
Actions: | 2022-11-28 17:24:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24552084505558014, "perplexity": 6601.397782559861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710534.53/warc/CC-MAIN-20221128171516-20221128201516-00165.warc.gz"} |
https://demo.formulasearchengine.com/wiki/Partial_fraction_decomposition | # Partial fraction decomposition
In algebra, the partial fraction decomposition or partial fraction expansion of a rational fraction (that is, a fraction such that the numerator and the denominator are both polynomials) is the operation that consists in expressing the fraction as a sum of a polynomial (possibly zero) and one or several fractions with a simpler denominator.
The importance of the partial fraction decomposition lies in the fact that it provides an algorithm for computing the antiderivative of a rational function.
In symbols, one can use partial fraction expansion to change a rational fraction in the form
${\displaystyle {\frac {f(x)}{g(x)}}}$
where ƒ and g are polynomials, into an expression of the form
${\displaystyle \sum _{j}{\frac {f_{j}(x)}{g_{j}(x)}}}$
where gj (x) are polynomials that are factors of g(x), and are in general of lower degree. Thus, the partial fraction decomposition may be seen as the inverse procedure of the more elementary operation of addition of rational fractions, which produces a single rational fraction with a numerator and denominator usually of high degree. The full decomposition pushes the reduction as far as it will go: in other words, the factorization of g is used as much as possible. Thus, the outcome of a full partial fraction expansion expresses that fraction as a sum of fractions, where:
• the denominator of each term is a power of an irreducible (not factorable) polynomial and
• the numerator is a polynomial of smaller degree than that irreducible polynomial. To decrease the degree of the numerator directly, the Euclidean division can be used, but in fact if ƒ already has lower degree than g this isn't helpful.
## Basic principles
Assume a rational function ${\displaystyle R(x)={\frac {f(x)}{g(x)}}}$ in one indeterminate x has a denominator that factors as
${\displaystyle g(x)=P(x)\cdot Q(x)\,}$
over a field K (we can take this to be real numbers, or complex numbers). Assume further that P and Q have no common factor. By Bézout's identity for polynomials, there exist polynomials C(x) and D(x) such that
${\displaystyle CP+DQ=1}$
Thus ${\displaystyle {\frac {1}{g(x)}}={\frac {CP+DQ}{PQ}}={\frac {C}{Q}}+{\frac {D}{P}}}$ and hence R may be written as
${\displaystyle R={\frac {f(x)}{g(x)}}={\frac {Df(x)}{P}}+{\frac {Cf(x)}{Q}},}$
where all numerators are polynomials.
Using this idea inductively we can write R(x) as a sum with denominators powers of irreducible polynomials. To take this further, if required, write:
${\displaystyle {\frac {G(x)}{F(x)^{n}}}}$
as a sum with denominators powers of F and numerators of degree less than that of F, plus a possible extra polynomial. This can be done with the Euclidean algorithm for polynomials. The result is the following theorem:
Let ƒ and g be nonzero polynomials over a field K. Write g as a product of powers of distinct irreducible polynomials:
${\displaystyle g=\prod _{i=1}^{k}p_{i}^{n_{i}}.}$
There are (unique) polynomials b and aij with deg aij < deg pi such that
${\displaystyle {\frac {f}{g}}=b+\sum _{i=1}^{k}\sum _{j=1}^{n_{i}}{\frac {a_{ij}}{p_{i}^{j}}}.}$
If deg ƒ < deg g, then b = 0.
Therefore, when the field K is the complex numbers, we can assume that each pi has degree 1 (by the fundamental theorem of algebra), and the numerators will be constant. When K is the real numbers, some of the pi might be quadratic, so, in the partial fraction decomposition, a quotient of a linear polynomial by a power of a quadratic might occur.
In the preceding theorem, one may replace "distinct irreducible polynomials" by "pairwise coprime polynomials that are coprime with their derivative". For example, the pi may be the factors of the square-free factorization of g. When K is the field of the rational numbers, as is typically the case in computer algebra, this allows one to replace factorization by greatest-common-divisor computations for computing the partial fraction decomposition.
## Application to symbolic integration
For the purpose of symbolic integration, the preceding result may be refined into
Let ƒ and g be nonzero polynomials over a field K. Write g as a product of powers of pairwise coprime polynomials which have no multiple root in an algebraically closed field:
${\displaystyle g=\prod _{i=1}^{k}p_{i}^{n_{i}}.}$
There are (unique) polynomials b and cij with deg cij < deg pi such that
${\displaystyle {\frac {f}{g}}=b+\sum _{i=1}^{k}\sum _{j=2}^{n_{i}}\left({\frac {c_{ij}}{p_{i}^{j-1}}}\right)'+\sum _{i=1}^{k}{\frac {c_{i1}}{p_{i}}}.}$
where ${\displaystyle X'}$ denotes the derivative of ${\displaystyle X.}$
This reduces the computation of the antiderivative of a rational function to the integration of the last sum, which is called the logarithmic part, because its antiderivative is a linear combination of logarithms. In fact, we have
${\displaystyle {\frac {c_{i1}}{p_{i}}}=\sum _{\alpha _{j}:p_{i}(\alpha _{j})=0}{\frac {c_{i1}(\alpha _{j})}{p'_{i}(\alpha _{j})}}{\frac {1}{x-\alpha _{j}}}.}$
There are various methods to compute the above decomposition. The simplest to describe is probably the so-called Hermite's method. As the degree of cij is bounded by the degree of pi, and the degree of b is the difference of the degrees of f and g (if this difference is nonnegative; otherwise, b = 0), one may write these unknown polynomials as polynomials with unknown coefficients. Reducing the two members of the above formula to the same denominator and writing that the coefficients of each power of x are the same in the two numerators, one gets a system of linear equations, which can be solved to obtain the desired values for the unknown coefficients.
## Procedure
Given two polynomials ${\displaystyle P(x)}$ and ${\displaystyle Q(x)=(x-\alpha _{1})(x-\alpha _{2})\cdots (x-\alpha _{n})}$, where the αi are distinct constants and deg P < n, partial fractions are generally obtained by supposing that
${\displaystyle {\frac {P(x)}{Q(x)}}={\frac {c_{1}}{x-\alpha _{1}}}+{\frac {c_{2}}{x-\alpha _{2}}}+\cdots +{\frac {c_{n}}{x-\alpha _{n}}}}$
and solving for the ci constants, by substitution, by equating the coefficients of terms involving the powers of x, or otherwise. (This is a variant of the method of undetermined coefficients.)
A more direct computation, which is strongly related with Lagrange interpolation consists in writing
${\displaystyle {\frac {P(x)}{Q(x)}}=\sum _{i=1}^{n}{\frac {P(\alpha _{i})}{Q'(\alpha _{i})}}{\frac {1}{(x-\alpha _{i})}}}$
where ${\displaystyle Q'}$ is the derivative of the polynomial ${\displaystyle Q}$.
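As a concrete sketch of this formula (my own illustration, not from the article), the coefficients P(αi)/Q′(αi) can be computed numerically: for Q(x) = Π(x − αj) with distinct roots, the derivative at a simple root αi is the product Π over j ≠ i of (αi − αj).

```python
def partial_fraction_coeffs(P, roots):
    """Coefficients c_i = P(alpha_i) / Q'(alpha_i) in the expansion
    P(x)/Q(x) = sum_i c_i / (x - alpha_i), assuming distinct roots."""
    coeffs = []
    for i, a in enumerate(roots):
        # For a simple root, Q'(a) is the product of (a - other roots).
        qprime = 1.0
        for j, b in enumerate(roots):
            if j != i:
                qprime *= a - b
        coeffs.append(P(a) / qprime)
    return coeffs

# 1 / ((x + 3)(x - 1)): roots -3 and 1, numerator P(x) = 1
c = partial_fraction_coeffs(lambda x: 1.0, [-3.0, 1.0])
print(c)  # [-0.25, 0.25]

# Spot check: sum of c_i / (x - alpha_i) reproduces 1/Q(x) at x = 2
x = 2.0
assert abs(sum(ci / (x - a) for ci, a in zip(c, [-3.0, 1.0]))
           - 1.0 / ((x + 3) * (x - 1))) < 1e-12
```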
This approach does not account for several other cases, but can be modified accordingly:

• If deg P ≥ deg Q, first perform the Euclidean division of P by Q (polynomial long division), giving
${\displaystyle {\frac {P(x)}{Q(x)}}=E(x)+{\frac {R(x)}{Q(x)}},}$
and then seek partial fractions for the remainder fraction (which by definition satisfies deg R < deg Q).
• If Q(x) contains factors which are irreducible over the given field, then the numerator N(x) of each partial fraction with such a factor F(x) in the denominator must be sought as a polynomial with deg N < deg F, rather than as a constant. For example, take the following decomposition over R:
${\displaystyle {\frac {x^{2}+1}{(x+2)(x-1)\color {Blue}(x^{2}+x+1)}}={\frac {a}{x+2}}+{\frac {b}{x-1}}+{\frac {\color {OliveGreen}cx+d}{\color {Blue}x^{2}+x+1}}.}$
• Suppose ${\displaystyle Q(x)=(x-\alpha )^{r}S(x)}$ and ${\displaystyle S(\alpha )\neq 0}$. Then Q(x) has a zero α of multiplicity r, and in the partial fraction decomposition, r of the partial fractions will involve the powers of ${\displaystyle (x-\alpha )}$. For illustration, take S(x) = 1 to get the following decomposition:
${\displaystyle {\frac {P(x)}{Q(x)}}={\frac {P(x)}{(x-\alpha )^{r}}}={\frac {c_{1}}{x-\alpha }}+{\frac {c_{2}}{(x-\alpha )^{2}}}+\cdots +{\frac {c_{r}}{(x-\alpha )^{r}}}.}$
### Illustration
In an example application of this procedure, (3x + 5)/(1 − 2x)² can be decomposed in the form
${\displaystyle {\frac {3x+5}{(1-2x)^{2}}}={\frac {A}{(1-2x)^{2}}}+{\frac {B}{(1-2x)}}.}$
Clearing denominators shows that 3x + 5 = A + B(1 − 2x). Expanding and equating the coefficients of powers of x gives
5 = A + B and 3 = −2B
Solving for A and B yields A = 13/2 and B = −3/2. Hence,
${\displaystyle {\frac {3x+5}{(1-2x)^{2}}}={\frac {13/2}{(1-2x)^{2}}}+{\frac {-3/2}{(1-2x)}}.}$
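The result can be spot-checked numerically (illustration only): both sides of the decomposition must agree at every x ≠ 1/2.

```python
def lhs(x):
    # original fraction (3x + 5) / (1 - 2x)^2
    return (3 * x + 5) / (1 - 2 * x) ** 2

def rhs(x):
    # partial fraction form with A = 13/2 and B = -3/2
    return (13 / 2) / (1 - 2 * x) ** 2 + (-3 / 2) / (1 - 2 * x)

for x in (-1.0, 0.0, 2.0, 10.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12
print("decomposition verified at sample points")
```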
### Residue method
Over the complex numbers, suppose ƒ(x) is a rational proper fraction, and can be decomposed into
${\displaystyle f(x)=\sum _{i}\left({\frac {a_{i1}}{x-x_{i}}}+{\frac {a_{i2}}{(x-x_{i})^{2}}}+\cdots +{\frac {a_{ik_{i}}}{(x-x_{i})^{k_{i}}}}\right).}$
Let
${\displaystyle g_{ij}(x)=(x-x_{i})^{j-1}f(x),}$
then according to the uniqueness of Laurent series, aij is the coefficient of the term ${\displaystyle (x-x_{i})^{-1}}$ in the Laurent expansion of gij(x) about the point xi, i.e., its residue
${\displaystyle a_{ij}=\operatorname {Res} (g_{ij},x_{i}).}$
This is given directly by the formula
${\displaystyle a_{ij}={\frac {1}{(k_{i}-j)!}}\lim _{x\to x_{i}}{\frac {d^{k_{i}-j}}{dx^{k_{i}-j}}}\left((x-x_{i})^{k_{i}}f(x)\right),}$
or in the special case when xi is a simple root,
${\displaystyle a_{i1}={\frac {P(x_{i})}{Q'(x_{i})}},}$
when
${\displaystyle f(x)={\frac {P(x)}{Q(x)}}.}$
Note that P(x) and Q(x) may or may not be polynomials.
## Over the reals
Partial fractions are used in real-variable integral calculus to find real-valued antiderivatives of rational functions. Partial fraction decomposition of real rational functions is also used to find their inverse Laplace transforms.
### General result
Let ƒ(x) be any rational function over the real numbers. In other words, suppose there exist real polynomial functions p(x) and q(x) ≠ 0, such that
${\displaystyle f(x)={\frac {p(x)}{q(x)}}}$
By dividing both the numerator and the denominator by the leading coefficient of q(x), we may assume without loss of generality that q(x) is monic. By the fundamental theorem of algebra, we can write
${\displaystyle q(x)=(x-a_{1})^{j_{1}}\cdots (x-a_{m})^{j_{m}}(x^{2}+b_{1}x+c_{1})^{k_{1}}\cdots (x^{2}+b_{n}x+c_{n})^{k_{n}}}$
where a1,..., am, b1,..., bn, c1,..., cn are real numbers with bi² − 4ci < 0, and j1,..., jm, k1,..., kn are positive integers. The terms (x − ai) are the linear factors of q(x) which correspond to real roots of q(x), and the terms (x² + bix + ci) are the irreducible quadratic factors of q(x) which correspond to pairs of complex conjugate roots of q(x).
Then the partial fraction decomposition of ƒ(x) is the following:
${\displaystyle f(x)={\frac {p(x)}{q(x)}}=P(x)+\sum _{i=1}^{m}\sum _{r=1}^{j_{i}}{\frac {A_{ir}}{(x-a_{i})^{r}}}+\sum _{i=1}^{n}\sum _{r=1}^{k_{i}}{\frac {B_{ir}x+C_{ir}}{(x^{2}+b_{i}x+c_{i})^{r}}}}$
Here, P(x) is a (possibly zero) polynomial, and the Air, Bir, and Cir are real constants. There are a number of ways the constants can be found.
The most straightforward method is to multiply through by the common denominator q(x). We then obtain an equation of polynomials whose left-hand side is simply p(x) and whose right-hand side has coefficients which are linear expressions of the constants Air, Bir, and Cir. Since two polynomials are equal if and only if their corresponding coefficients are equal, we can equate the coefficients of like terms. In this way, a system of linear equations is obtained which always has a unique solution. This solution can be found using any of the standard methods of linear algebra. It can also be found with limits (see Example 5).
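As a minimal worked instance of this "equate coefficients and solve" recipe (my own sketch), take 1/((x + 3)(x − 1)) = A/(x + 3) + B/(x − 1). Clearing denominators gives 1 = A(x − 1) + B(x + 3); equating the x and constant coefficients yields the 2×2 system A + B = 0, −A + 3B = 1, solved below by Cramer's rule:

```python
def solve2(a11, a12, b1, a21, a22, b2):
    """Cramer's rule for a 2x2 linear system
    a11*x + a12*y = b1, a21*x + a22*y = b2."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det,
            (a11 * b2 - b1 * a21) / det)

# coefficient of x:  A + B = 0
# constant term:    -A + 3B = 1
A, B = solve2(1, 1, 0,
              -1, 3, 1)
print(A, B)  # -0.25 0.25
```

For larger systems one would hand the same matrix to any linear-algebra routine; the setup step (matching coefficients of like powers of x) is identical.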
## Examples
### Example 1
${\displaystyle f(x)={\frac {1}{x^{2}+2x-3}}}$
Here, the denominator splits into two distinct linear factors:
${\displaystyle q(x)=x^{2}+2x-3=(x+3)(x-1)}$
so we have the partial fraction decomposition
${\displaystyle f(x)={\frac {1}{x^{2}+2x-3}}={\frac {A}{x+3}}+{\frac {B}{x-1}}}$
Multiplying through by x² + 2x − 3, we have the polynomial identity
${\displaystyle 1=A(x-1)+B(x+3)}$
Substituting x = −3 into this equation gives A = −1/4, and substituting x = 1 gives B = 1/4, so that
${\displaystyle f(x)={\frac {1}{x^{2}+2x-3}}={\frac {1}{4}}\left({\frac {-1}{x+3}}+{\frac {1}{x-1}}\right)}$
### Example 2
${\displaystyle f(x)={\frac {x^{3}+16}{x^{3}-4x^{2}+8x}}}$
After long-division, we have
${\displaystyle f(x)=1+{\frac {4x^{2}-8x+16}{x^{3}-4x^{2}+8x}}=1+{\frac {4x^{2}-8x+16}{x(x^{2}-4x+8)}}}$
Since (−4)² − 4×8 = −16 < 0, the factor x² − 4x + 8 is irreducible, and the partial fraction decomposition over the reals has the shape
${\displaystyle {\frac {4x^{2}-8x+16}{x(x^{2}-4x+8)}}={\frac {A}{x}}+{\frac {Bx+C}{x^{2}-4x+8}}}$
Multiplying through by x³ − 4x² + 8x, we have the polynomial identity
${\displaystyle 4x^{2}-8x+16=A(x^{2}-4x+8)+(Bx+C)x}$
Taking x = 0, we see that 16 = 8A, so A = 2. Comparing the x² coefficients, we see that 4 = A + B = 2 + B, so B = 2. Comparing linear coefficients, we see that −8 = −4A + C = −8 + C, so C = 0. Altogether,
${\displaystyle f(x)=1+2\left({\frac {1}{x}}+{\frac {x}{x^{2}-4x+8}}\right)}$
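A quick numerical check of this decomposition (illustration only):

```python
def f(x):
    # original fraction (x^3 + 16) / (x^3 - 4x^2 + 8x)
    return (x ** 3 + 16) / (x ** 3 - 4 * x ** 2 + 8 * x)

def decomposed(x):
    # 1 + 2 * (1/x + x / (x^2 - 4x + 8))
    return 1 + 2 * (1 / x + x / (x ** 2 - 4 * x + 8))

for x in (1.0, 2.0, -3.0, 7.5):
    assert abs(f(x) - decomposed(x)) < 1e-9
print("Example 2 decomposition verified")
```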
The following example illustrates almost all the "tricks" one would need to use short of consulting a computer algebra system.
### Example 3
${\displaystyle f(x)={\frac {x^{9}-2x^{6}+2x^{5}-7x^{4}+13x^{3}-11x^{2}+12x-4}{x^{7}-3x^{6}+5x^{5}-7x^{4}+7x^{3}-5x^{2}+3x-1}}}$
After long-division and factoring the denominator, we have
${\displaystyle f(x)=x^{2}+3x+4+{\frac {2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x}{(x-1)^{3}(x^{2}+1)^{2}}}}$
The partial fraction decomposition takes the form
${\displaystyle {\frac {2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x}{(x-1)^{3}(x^{2}+1)^{2}}}={\frac {A}{x-1}}+{\frac {B}{(x-1)^{2}}}+{\frac {C}{(x-1)^{3}}}+{\frac {Dx+E}{x^{2}+1}}+{\frac {Fx+G}{(x^{2}+1)^{2}}}}$
Multiplying through by (x − 1)³(x² + 1)² we have the polynomial identity
{\displaystyle {\begin{aligned}&{}\quad 2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x\\&=A(x-1)^{2}(x^{2}+1)^{2}+B(x-1)(x^{2}+1)^{2}+C(x^{2}+1)^{2}+(Dx+E)(x-1)^{3}(x^{2}+1)+(Fx+G)(x-1)^{3}\end{aligned}}}
Taking x = 1 gives 4 = 4C, so C = 1. Similarly, taking x = i gives 2 + 2i = (Fi + G)(2 + 2i), so Fi + G = 1, so F = 0 and G = 1 by equating real and imaginary parts. With C = G = 1 and F = 0, taking x = 0 we get A − B + 1 − E − 1 = 0, thus E = A − B.
We now have the identity
{\displaystyle {\begin{aligned}&{}2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x\\&=A(x-1)^{2}(x^{2}+1)^{2}+B(x-1)(x^{2}+1)^{2}+(x^{2}+1)^{2}+(Dx+(A-B))(x-1)^{3}(x^{2}+1)+(x-1)^{3}\\&=A((x-1)^{2}(x^{2}+1)^{2}+(x-1)^{3}(x^{2}+1))+B((x-1)(x^{2}+1)^{2}-(x-1)^{3}(x^{2}+1))+(x^{2}+1)^{2}+Dx(x-1)^{3}(x^{2}+1)+(x-1)^{3}\end{aligned}}}
Expanding and sorting by exponents of x we get
{\displaystyle {\begin{aligned}&{}2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x\\&=(A+D)x^{6}+(-A-3D)x^{5}+(2B+4D+1)x^{4}+(-2B-4D+1)x^{3}+(-A+2B+3D-1)x^{2}+(A-2B-D+3)x\end{aligned}}}
We can now compare the coefficients and see that
{\displaystyle {\begin{aligned}A+D&=&2\\-A-3D&=&-4\\2B+4D+1&=&5\\-2B-4D+1&=&-3\\-A+2B+3D-1&=&1\\A-2B-D+3&=&3,\end{aligned}}}
With A = 2 − D and −A − 3D = −4 we get A = D = 1 and so B = 0; furthermore C = 1, E = A − B = 1, F = 0 and G = 1.
The partial fraction decomposition of ƒ(x) is thus
${\displaystyle f(x)=x^{2}+3x+4+{\frac {1}{(x-1)}}+{\frac {1}{(x-1)^{3}}}+{\frac {x+1}{x^{2}+1}}+{\frac {1}{(x^{2}+1)^{2}}}.}$
Alternatively, instead of expanding, one can obtain other linear dependences on the coefficients computing some derivatives at x=1 and at x=i in the above polynomial identity. (To this end, recall that the derivative at x=a of (x−a)mp(x) vanishes if m > 1 and it is just p(a) if m=1.) Thus, for instance the first derivative at x=1 gives
${\displaystyle 2\cdot 6-4\cdot 5+5\cdot 4-3\cdot 3+2+3=A\cdot (0+0)+B\cdot (4+0)+8+D\cdot 0}$
that is 8 = 4B + 8 so B=0.
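Given how many coefficients were juggled, a numerical spot-check of the final decomposition is reassuring (a Python sketch):

```python
# Verify the Example 3 decomposition at a few sample points
def f(x):
    num = x**9 - 2*x**6 + 2*x**5 - 7*x**4 + 13*x**3 - 11*x**2 + 12*x - 4
    den = x**7 - 3*x**6 + 5*x**5 - 7*x**4 + 7*x**3 - 5*x**2 + 3*x - 1
    return num / den

def pf(x):
    return (x**2 + 3*x + 4 + 1/(x - 1) + 1/(x - 1)**3
            + (x + 1)/(x**2 + 1) + 1/(x**2 + 1)**2)

for x in (0.5, 2.0, -1.5, 3.0):
    assert abs(f(x) - pf(x)) < 1e-9
```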
### Example 4 (residue method)
${\displaystyle f(z)={\frac {z^{2}-5}{(z^{2}-1)(z^{2}+1)}}={\frac {z^{2}-5}{(z+1)(z-1)(z+i)(z-i)}}}$
Thus, f(z) can be decomposed into rational functions whose denominators are z+1, z−1, z+i, z−i. Since each factor appears only to the first power, −1, 1, −i and i are simple poles.
Hence, the residues associated with each pole, given by
${\displaystyle {\frac {P(z_{i})}{Q'(z_{i})}}={\frac {z_{i}^{2}-5}{4z_{i}^{3}}}}$,
are
${\displaystyle 1,-1,{\tfrac {3i}{2}},-{\tfrac {3i}{2}}}$,
respectively, and
${\displaystyle f(z)={\frac {1}{z+1}}-{\frac {1}{z-1}}+{\frac {3i}{2}}{\frac {1}{z+i}}-{\frac {3i}{2}}{\frac {1}{z-i}}}$.
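The residue computation is easy to reproduce with complex arithmetic; a short Python sketch:

```python
# Residues P(z_i)/Q'(z_i) for f(z) = (z^2 - 5)/(z^4 - 1), Q'(z) = 4z^3
poles = (-1, 1, -1j, 1j)

def residue(z):
    return (z**2 - 5) / (4 * z**3)

res = [residue(z) for z in poles]
# Expected: 1, -1, 3i/2, -3i/2
assert all(abs(r - e) < 1e-12 for r, e in zip(res, (1, -1, 1.5j, -1.5j)))

# Cross-check: the sum of the partial fractions reproduces f(z)
def f(z):
    return (z**2 - 5) / (z**4 - 1)

z = 0.3 + 0.7j
assert abs(f(z) - sum(r / (z - p) for r, p in zip(res, poles))) < 1e-12
```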
### Example 5 (limit method)
Limits can be used to find a partial fraction decomposition.
${\displaystyle f(x)={\frac {1}{x^{3}-1}}}$
First, factor the denominator:
${\displaystyle f(x)={\frac {1}{(x-1)(x^{2}+x+1)}}}$
The decomposition takes the form of
${\displaystyle {\frac {1}{(x-1)(x^{2}+x+1)}}={\frac {A}{x-1}}+{\frac {Bx+C}{x^{2}+x+1}}}$
As ${\displaystyle x\to 1}$, the A term dominates the right-hand side. Multiplying both sides by ${\displaystyle x-1}$ and letting ${\displaystyle x\to 1}$ eliminates the B and C terms and isolates A:
${\displaystyle A=\lim _{x\to 1}{\frac {1}{x^{2}+x+1}}={\frac {1}{3}}}$
As ${\displaystyle x\to \infty }$, the right-hand side is
${\displaystyle \lim _{x\to \infty }{{\frac {A}{x-1}}+{\frac {Bx+C}{x^{2}+x+1}}}={\frac {A}{x}}+{\frac {Bx}{x^{2}}}={\frac {A+B}{x}}.}$
Since ${\displaystyle \lim _{x\to \infty }x\,f(x)=\lim _{x\to \infty }{\frac {x}{x^{3}-1}}=0}$, matching the ${\displaystyle 1/x}$ coefficients gives ${\displaystyle A+B=0}$, so ${\displaystyle B=-A=-{\tfrac {1}{3}}}$. Finally, substituting ${\displaystyle x=0}$ into the decomposition gives ${\displaystyle -1=-A+C}$, so ${\displaystyle C=A-1=-{\tfrac {2}{3}}}$, and altogether
${\displaystyle {\frac {1}{x^{3}-1}}={\frac {1}{3}}\,{\frac {1}{x-1}}-{\frac {1}{3}}\,{\frac {x+2}{x^{2}+x+1}}.}$
## The role of the Taylor polynomial
The partial fraction decomposition of a rational function can be related to Taylor's theorem as follows. Let
${\displaystyle P(x),Q(x),A_{1}(x),\dots ,A_{r}(x)}$
be real or complex polynomials; assume that
${\displaystyle \textstyle Q=\prod _{j=1}^{r}(x-\lambda _{j})^{\nu _{j}},}$
that
${\displaystyle \textstyle \deg(P)<\deg(Q)=\sum _{j=1}^{r}\nu _{j},}$
and that
${\displaystyle \textstyle \deg A_{j}<\nu _{j}{\text{ for }}j=1,\dots ,r.}$
Define also
${\displaystyle \textstyle Q_{i}=\prod _{j\neq i}(x-\lambda _{j})^{\nu _{j}}={\frac {Q}{(x-\lambda _{i})^{\nu _{i}}}}{\text{ for }}i=1,\dots ,r.}$
Then we have
${\displaystyle {\frac {P}{Q}}=\sum _{j=1}^{r}{\frac {A_{j}}{(x-\lambda _{j})^{\nu _{j}}}}}$
${\displaystyle A_{i}(x):=\sum _{k=0}^{\nu _{i}-1}{\frac {1}{k!}}\left({\frac {P}{Q_{i}}}\right)^{(k)}(\lambda _{i})\ (x-\lambda _{i})^{k}.}$
Taylor's theorem (in the real or complex case) then provides a proof of the existence and uniqueness of the partial fraction decomposition, and a characterization of the coefficients.
Sketch of the proof: The above partial fraction decomposition implies, for each 1 ≤ i ≤ r, a polynomial expansion
${\displaystyle {\frac {P}{Q_{i}}}=A_{i}+O((x-\lambda _{i})^{\nu _{i}})\qquad }$, as ${\displaystyle x\to \lambda _{i};}$
so ${\displaystyle \textstyle A_{i}}$ is the Taylor polynomial of ${\displaystyle \textstyle {\frac {P}{Q_{i}}}}$, because of the uniqueness of the polynomial expansion of order ${\displaystyle \textstyle \nu _{i}-1}$, and by assumption ${\displaystyle \textstyle \deg A_{i}<\nu _{i}}$.
Conversely, if the ${\displaystyle \textstyle A_{i}}$ are the Taylor polynomials, the above expansions at each ${\displaystyle \textstyle \lambda _{i}}$ hold, therefore we also have
${\displaystyle P-Q_{i}A_{i}=O((x-\lambda _{i})^{\nu _{i}})\qquad }$, as ${\displaystyle x\to \lambda _{i},}$
which implies that the polynomial ${\displaystyle \textstyle P-Q_{i}A_{i}}$ is divisible by ${\displaystyle \textstyle (x-\lambda _{i})^{\nu _{i}}.}$
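A small concrete instance of the theorem (a sketch with P = 1 and Q = (x − 1)²x, so that A₁ is the degree-one Taylor polynomial of 1/x at x = 1, namely 2 − x, and A₂ = 1):

```python
# P/Q = 1 / ((x-1)^2 * x):  lambda_1 = 1 (nu_1 = 2), lambda_2 = 0 (nu_2 = 1)
# A_1 = Taylor polynomial of P/Q_1 = 1/x at x = 1:  1 - (x - 1) = 2 - x
# A_2 = value of P/Q_2 = 1/(x-1)^2 at x = 0:  1
def f(x):
    return 1 / ((x - 1)**2 * x)

def pf(x):
    return (2 - x) / (x - 1)**2 + 1 / x

for x in (0.5, 3.0, -2.0):
    assert abs(f(x) - pf(x)) < 1e-12
```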
## Fractions of integers
The idea of partial fractions can be generalized to other integral domains, say the ring of integers where prime numbers take the role of irreducible denominators. For example:
${\displaystyle {\frac {1}{18}}={\frac {1}{2}}-{\frac {1}{3}}-{\frac {1}{3^{2}}}.}$
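This identity can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

# 1/18 over the integers, with prime-power denominators 2, 3 and 3^2
assert Fraction(1, 2) - Fraction(1, 3) - Fraction(1, 9) == Fraction(1, 18)
```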
https://physics.stackexchange.com/questions/128337/phase-space-derivation-of-quantum-harmonic-oscillator-partition-function?noredirect=1 | # Phase space derivation of quantum harmonic oscillator partition function
I would like to derive the partition function for the quantum Harmonic oscillator from scratch:
$$\tag{1} Z = \int dp \, dx\, e^{-\beta H}.$$
The free particle appears in many textbooks. With $H = p^2$ it is a Gaussian integral:

$$\tag{2} Z = \int dp \, dx\, e^{-\beta p^2} = L \int dp \, e^{-\beta p^2} = L \sqrt{\pi/\beta}.$$
I wanted to do the same calculation for the harmonic oscillator. I could take advantage of the fact that the integral factorizes:

$$\tag{3} Z = \int dp \, dx \, e^{-\beta (p^2 + x^2)} = \int dx \, e^{-\beta x^2} \cdot \int dp \, e^{-\beta p^2 } = \sqrt{\pi/\beta} \cdot \sqrt{\pi/\beta} = \pi/\beta .$$
Or I could integrate over the circles
$$\tag{4} H = p^2 + x^2 = E$$
take advantage of the fact that the Hamiltonian is symmetric under rotations in phase space; with $E = p^2 + x^2$ the area element is $dp \, dx = \tfrac{1}{2}\, dE \, d\theta$, so

$$\tag{5} Z = \int dp \, dx \, e^{-\beta (p^2 + x^2)} = \int_0^{2\pi} d\theta \int_0^\infty \tfrac{1}{2}\, e^{-\beta E} \, dE = \pi/\beta.$$
It seems I have forgotten that energy levels are quantized, so I should integrate over the circles
$$\tag{6} H = E_n = \hbar \omega (n + \tfrac{1}{2}).$$
Where are the $E_n$ wavefunctions $\psi_n$ located in phase space? By Heisenberg uncertainty, we can't specify both $(p,x)$ in the phase plane. Are they equidistributed on the circle $p^2 + x^2 = E$?
• For a quantum system, phase space is not a good concept. You need to know the Hilbert space of the system (e.g. by knowing the energy levels), and then calculate $\mathrm{Tr}(\mathrm{e}^{-\beta H})$ on it. This is not the same as $\int\mathrm{d}x\mathrm{d}p\mathrm{e}^{-\beta H}$ in general. – ACuriousMind Jul 27 '14 at 17:40
• @ACuriousMind I want to understand the connection between the canonical ensemble in quantum statistical mechanics and classical stastical mechanics. This seems to boil down to the relationship between the phase space and the Hilbert space. – john mangual Jul 27 '14 at 17:47
• @ACuriousMind there seems to be a forulation of phase space for quantum mechAnics by Moyal – john mangual Jul 27 '14 at 17:50
(Not really an answer, but as one should not state such things in comments, I'm putting it here)
You commented: "This seems to boil down to the relationship between the phase space and the Hilbert space."
That's a deep question. I recommend reading Urs Schreiber's excellent post on how one gets from the phase space to the operators on a Hilbert space in a natural fashion. I'm not certain how the Wigner/Moyal picture of QM relates to quantum statistical mechanics, since we define the quantum canonical partition function to be $Z(\beta) := \mathrm{Tr}(\mathrm{e}^{-\beta H})$ on the Hilbert space of states, as we basically draw the analogy that the classical phase space is the "space of states" for our classical theory, and the integral the trace over it, and generalize that to the quantum theory.
Also note that, in a quantum world, $\int\mathrm{d}x\mathrm{d}p\mathrm{e}^{-\beta H}$ is a bit of a non-sensical expression, since $H$ is an operator - the result of this would not be a number, which the partition function certainly should be.
Well, first of all, it is important to realize that the integrals (1) and (3) are not merely ordinary double integrals over a single $x$- and a single $p$-variable. Instead they are (Wick-rotated) path integrals containing, heuristically speaking, infinitely many integrations.
The path integral derivation of the free particle and the harmonic oscillator can be found in many textbooks on quantum mechanics, see e.g. Refs. 1-3. For the free particle, see also e.g. this Phys.SE post.
Typically, one imposes Dirichlet Boundary Conditions (DBC)
$$\tag{D} x(t_i)=x_i\quad\text{and}\quad x(t_f)=x_f$$
on the position variable $x$ at initial and final times, $t_i$ and $t_f$, respectively. The next step is to "sum/integrate over all histories" consistent with eq. (D), cf. Refs. 1-3.
As OP correctly observes, eq. (D) implies that the corresponding initial and final momentum variables $p(t_i)$ and $p(t_f)$ remain unknown according to the Heisenberg uncertainty principle.
References:
1. R.P. Feynman and A.R. Hibbs, Quantum Mechanics and Path Integrals, 1965; Section 3.
2. J.J. Sakurai, Modern Quantum Mechanics, 1994, Section 2.5.
3. J. Polchinski, String Theory, Vol. 1, 1998; Appendix A.1. | 2021-04-17 10:52:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8722517490386963, "perplexity": 273.67021747400236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038119532.50/warc/CC-MAIN-20210417102129-20210417132129-00376.warc.gz"} |
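As a numerical footnote to the thread (a sketch in units $\hbar = \omega = m = 1$, normalizing the classical measure by $2\pi\hbar$ — a convention eqs. (2)–(5) of the question omit): the quantum partition function $Z_{\rm qm} = \sum_n e^{-\beta(n+1/2)} = 1/(2\sinh(\beta/2))$ approaches the classical phase-space result $Z_{\rm cl} = 1/\beta$ in the high-temperature limit $\beta \to 0$:

```python
from math import sinh, exp

def z_quantum(beta):
    # Tr e^{-beta H} = sum_n e^{-beta(n + 1/2)} = 1 / (2 sinh(beta/2))
    return 1 / (2 * sinh(beta / 2))

def z_classical(beta):
    # integral dx dp/(2 pi) e^{-beta(p^2 + x^2)/2} = 1/beta
    return 1 / beta

# Closed form agrees with the direct sum over levels
beta = 1.0
direct = sum(exp(-beta * (n + 0.5)) for n in range(200))
assert abs(direct - z_quantum(beta)) < 1e-12

# Quantum -> classical as beta -> 0
for beta in (0.1, 0.01, 0.001):
    rel_err = abs(z_quantum(beta) - z_classical(beta)) / z_classical(beta)
    assert rel_err < beta**2   # agreement improves at high temperature
```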
http://www.reference.com/browse/Hyperbolic+equilibrium+point | Definitions
# Hyperbolic equilibrium point
In mathematics, especially in the study of dynamical system, a hyperbolic equilibrium point or hyperbolic fixed point is a special type of fixed point.
The Hartman-Grobman theorem states that the orbit structure of a dynamical system in the neighbourhood of a hyperbolic fixed point is topologically equivalent to the orbit structure of the linearized dynamical system.
## Definition
Let
$F\colon \mathbb{R}^n \to \mathbb{R}^n$
be a C1 (that is, continuously differentiable) vector field with fixed point p and let J denote the Jacobian matrix of F at p. If the matrix J has no eigenvalues with zero real parts then p is called hyperbolic. Hyperbolic fixed points may also be called hyperbolic critical points or elementary critical points.
## Example
Consider the nonlinear system
$\frac{dx}{dt} = y,$
$\frac{dy}{dt} = -x - x^3 - \alpha y, \qquad \alpha \neq 0$
$\left(0,0\right)$ is the only equilibrium point. The linearization at the equilibrium is
$J(0,0) = \begin{pmatrix} 0 & 1 \\ -1 & -\alpha \end{pmatrix}.$
The eigenvalues of this matrix are $\frac{-\alpha \pm \sqrt{\alpha^2 - 4}}{2}$. For all values of $\alpha \neq 0$, the eigenvalues have non-zero real part. Thus, this equilibrium point is a hyperbolic equilibrium point. The linearized system will behave similarly to the non-linear system near $(0,0)$. When $\alpha = 0$, the system has a nonhyperbolic equilibrium at $(0,0)$.
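The eigenvalue condition can be checked directly; a small sketch (standard library only) computing the eigenvalues of $J(0,0)$ from the quadratic formula for its characteristic polynomial $\lambda^2 + \alpha\lambda + 1$:

```python
import cmath

def eigenvalues(alpha):
    """Eigenvalues of J(0,0) = [[0, 1], [-1, -alpha]]."""
    d = cmath.sqrt(alpha**2 - 4)
    return (-alpha + d) / 2, (-alpha - d) / 2

# alpha != 0: both eigenvalues have non-zero real part -> hyperbolic
for alpha in (0.5, 1.0, -2.0, 3.0):
    assert all(abs(l.real) > 1e-12 for l in eigenvalues(alpha))

# alpha == 0: eigenvalues are +/- i, purely imaginary -> nonhyperbolic
assert all(abs(l.real) < 1e-12 for l in eigenvalues(0.0))
```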
In the case of an infinite dimensional system - for example systems involving a time delay - the notion of the "hyperbolic part of the spectrum" refers to the above property. | 2014-04-18 17:24:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515014290809631, "perplexity": 286.8064771789119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00079-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.truckin24.de/oshkosh-m-1000-engl/ | # Oshkosh M-1000, engl.
Everything Oshkosh offered in the way of airport fire engines was known as the M Series – aircraft crash/fire/rescue vehicles.
These included vehicles with 4×4, 6×6 and 8×8 drives. At that time, this M-1000 belonged to the small fire engines of the M-series with 4×4 drive, where the 1000 stands for the capacity of the water tank, namely 1000 gallons (3785 liters).
In the 1970s, when Oshkosh already had a very extensive range of airport fire engines, the vehicles were categorized according to the size of the water tanks.
• 4000 liters – 1057 gallons
• 6000 liters – 1585 gallons
• 9000 liters – 2378 gallons
• 12000 liters – 3170 gallons
• 15000 liters – 3963 gallons
• 23000 liters – 6076 gallons
https://www.transtutors.com/questions/9-3-here-we-need-to-calculate-the-ratio-of-average-net-income-to-average-book-value--1309391.htm | # 9.3 Here we need to calculate the ratio of average net income to average book value to get the...
9.3 Here we need to calculate the ratio of average net income to average book value to get the AAR. Average net income is:
Average net income = ($2,000 + 4,000 + 6,000)/3 = $4,000
Average book value is:
Average book value = $12,000/2 = $6,000
So the average accounting return is:
AAR = $4,000/$6,000 = 0.6667, or 66.67%
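The same computation as a short Python sketch (the $12,000 gross book value depreciating straight-line to zero is assumed from the figures above):

```python
# Average accounting return = average net income / average book value
net_incomes = [2000, 4000, 6000]
avg_net_income = sum(net_incomes) / len(net_incomes)   # 4000
avg_book_value = (12000 + 0) / 2                       # 6000
aar = avg_net_income / avg_book_value
assert abs(aar - 2/3) < 1e-12                          # AAR = 66.67%
```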
https://tttapa.github.io/Pages/Mathematics/Systems-and-Control-Theory/Digital-filters/Exponential%20Moving%20Average/Exponential-Moving-Average.html | ### Difference equation
The difference equation of an exponential moving average filter is very simple: $y\left[n\right]=\alpha x\left[n\right]+\left(1-\alpha \right)y\left[n-1\right]$ In this equation, $y\left[n\right]$ is the current output, $y\left[n-1\right]$ is the previous output, and $x\left[n\right]$ is the current input; $\alpha$ is a number between 0 and 1. If $\alpha =1$, the output is just equal to the input, and no filtering takes place.
The filter is called 'exponential', because the weighting factor of previous inputs decreases exponentially. This can be easily demonstrated by substituting the previous outputs: $\begin{array}{rl}y\left[n\right]& =\alpha x\left[n\right]+\left(1-\alpha \right)y\left[n-1\right]\\ & =\alpha x\left[n\right]+\left(1-\alpha \right)\left(\alpha x\left[n-1\right]+\left(1-\alpha \right)y\left[n-2\right]\right)\\ & =\alpha x\left[n\right]+\left(1-\alpha \right)\left(\alpha x\left[n-1\right]+\left(1-\alpha \right)\left(\alpha x\left[n-2\right]+\left(1-\alpha \right)y\left[n-3\right]\right)\right)\\ & \dots \\ & =\alpha \sum _{k=0}^{n}{\left(1-\alpha \right)}^{k}x\left[n-k\right]\end{array}$
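This recursion is easy to implement and to check against the expanded sum above; a minimal sketch (assuming the filter starts from zero state, $y[-1]=0$):

```python
def ema(xs, alpha):
    """Exponential moving average: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    y = 0.0
    out = []
    for x in xs:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

# The recursion matches the expanded form y[n] = alpha * sum (1-alpha)^k x[n-k]
alpha = 0.25
xs = [1.0, 2.0, -1.0, 4.0, 0.5]
ys = ema(xs, alpha)
n = len(xs) - 1
expanded = alpha * sum((1 - alpha)**k * xs[n - k] for k in range(n + 1))
assert abs(ys[-1] - expanded) < 1e-12
```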
### Impulse and step response
From the previous equation, we can now easily calculate the impulse and step response.
The impulse response is the output of the filter when a Kronecker delta function is applied to the input.
Recall the definition of the Kronecker delta: $\delta \left[n\right]=\left\{\begin{array}{ll}1& n=0\\ 0& n\ne 0.\end{array}$ The impulse response of the EMA is $\begin{array}{rl}{y}_{impulse}\left[n\right]& =h\left[n\right]\\ & =\alpha \sum _{k=0}^{n}{\left(1-\alpha \right)}^{k}\delta \left[n-k\right]\\ & =\alpha \left(1-\alpha {\right)}^{n}.\end{array}$ For example, if $\alpha =0.25$, the impulse response is as follows:
The step response is the output of the filter when a Heaviside step function is applied to the input.
The Heaviside step function is defined as $H\left[n\right]=\left\{\begin{array}{ll}0& n<0\\ 1& n\ge 0.\end{array}$ The step response of the EMA is $\begin{array}{rl}{y}_{step}\left[n\right]& =\alpha \sum _{k=0}^{n}{\left(1-\alpha \right)}^{k}H\left[n-k\right]\\ & =\alpha \sum _{k=0}^{n}{\left(1-\alpha \right)}^{k}\\ & =1-{\left(1-\alpha \right)}^{n+1}.\end{array}$ For example, if $\alpha =0.25$, the step response is as follows:
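Both closed forms can be verified by simply running the filter on a Kronecker delta and on a step (a sketch, again assuming zero initial state):

```python
def ema_response(xs, alpha):
    y, out = 0.0, []
    for x in xs:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

alpha = 0.25
N = 20

# Impulse response: h[n] = alpha * (1 - alpha)^n
impulse = [1.0] + [0.0] * (N - 1)
h = ema_response(impulse, alpha)
assert all(abs(h[n] - alpha * (1 - alpha)**n) < 1e-12 for n in range(N))

# Step response: y[n] = 1 - (1 - alpha)^(n+1)
step = [1.0] * N
s = ema_response(step, alpha)
assert all(abs(s[n] - (1 - (1 - alpha)**(n + 1))) < 1e-12 for n in range(N))
```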
### Transfer function
The output of discrete-time linear time-invariant (DTLTI) systems, of which the EMA is an example, can be expressed as the convolution of the input with the impulse response. In other words, the impulse response describes the behavior of the system, for all possible inputs.
To prove the previous statement, we'll start with the following trivial property: any signal $x\left[n\right]$ can be expressed as a convolution of the signal with the Kronecker delta function, that is $x\left[n\right]=\sum _{k=-\mathrm{\infty }}^{\mathrm{\infty }}x\left[k\right]\,\delta \left[n-k\right].$ You can easily see that all terms where $k\ne n$ are zero, because the Kronecker delta is zero in that case. Only the term for $k=n$ is non-zero, in which case the Kronecker delta is one, so the result is just $x\left[n\right]$.
You can also interpret this as the signal being made up of a sum of infinitely many scaled and shifted Kronecker delta functions.
Let $T$ be the transformation performed by the DTLTI system, then $y\left[n\right]$ is the output after applying $T$ to the input signal $x\left[n\right]$. The following derivation makes use of the fact that $T$ is a linear transformation and that it is time-invariant: $y\left[n\right]=T\left\{x\left[n\right]\right\}=T\left\{\sum _{k=-\mathrm{\infty }}^{\mathrm{\infty }}x\left[k\right]\,\delta \left[n-k\right]\right\}=\sum _{k=-\mathrm{\infty }}^{\mathrm{\infty }}x\left[k\right]\,T\left\{\delta \left[n-k\right]\right\}=\sum _{k=-\mathrm{\infty }}^{\mathrm{\infty }}x\left[k\right]\,h\left[n-k\right]=x\left[n\right]\ast h\left[n\right].$ Since the factor $x\left[k\right]$ is independent of time $n$, it can be moved outside of the $T$ operator. $T$ applied to the Kronecker delta is (by definition) the impulse response of $T$, $h\left[n\right]$, but shifted by $k$ time steps.
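For the EMA this can be checked numerically: convolving the input with the impulse response $h[n]=\alpha(1-\alpha)^n$ reproduces the filter's output (a sketch with zero initial state):

```python
def ema(xs, alpha):
    y, out = 0.0, []
    for x in xs:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

alpha = 0.25
xs = [0.5, -1.0, 2.0, 3.0, -0.5, 1.0]
N = len(xs)

h = [alpha * (1 - alpha)**n for n in range(N)]   # impulse response

# Causal convolution: y[n] = sum_k x[k] * h[n - k]
conv = [sum(xs[k] * h[n - k] for k in range(n + 1)) for n in range(N)]

direct = ema(xs, alpha)
assert all(abs(a - b) < 1e-12 for a, b in zip(conv, direct))
```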
Analysis of such systems is usually easier in the Z-domain, in which the convolution is reduced to a simple product.
The (unilateral) Z-transform is defined as: $\mathcal{Z}\left\{x\left[n\right]\right\}=\sum _{n=0}^{\mathrm{\infty }}x\left[n\right]{z}^{-n}$ If $X\left(z\right)=\mathcal{Z}\left\{x\left[n\right]\right\}$, $Y\left(z\right)=\mathcal{Z}\left\{y\left[n\right]\right\}$ and $H\left(z\right)=\mathcal{Z}\left\{h\left[n\right]\right\}$ are the Z-transforms of the input, output and impulse response respectively, then: $\begin{array}{rl}\mathcal{Z}\left\{y\left[n\right]\right\}& =\mathcal{Z}\left\{x\left[n\right]\ast h\left[n\right]\right\}\\ Y\left(z\right)& =\sum _{n=0}^{\mathrm{\infty }}x\left[n\right]\ast h\left[n\right]{z}^{-n}\\ & =\sum _{n=0}^{\mathrm{\infty }}\sum _{k=0}^{\mathrm{\infty }}x\left[k\right]h\left[n-k\right]{z}^{-n}\\ & =\sum _{k=0}^{\mathrm{\infty }}\sum _{n=0}^{\mathrm{\infty }}x\left[k\right]h\left[n-k\right]{z}^{-n+k-k}\\ & =\sum _{k=0}^{\mathrm{\infty }}x\left[k\right]{z}^{-k}\sum _{n=0}^{\mathrm{\infty }}h\left[n-k\right]{z}^{-\left(n-k\right)}\\ Y\left(z\right)& =X\left(z\right)H\left(z\right)\\ H\left(z\right)& =\frac{Y\left(z\right)}{X\left(z\right)}\end{array}$ $H\left(z\right)$ is called the transfer function of the system.
Let's calculate the transfer function of the EMA.
We can use one of two approaches: use the difference equation and use some of the properties of the Z-transform to calculate $\frac{Y\left(z\right)}{X\left(z\right)}$, or apply the definition of the Z-transform to the impulse response $h\left[n\right]$ directly.
#### Using the difference equation
All you have to do is apply the time shifting property of the Z transform: $\mathrm{\forall }{n}_{0}\in \mathbb{N}:\mathcal{Z}\left\{y\left[n-{n}_{0}\right]\right\}={z}^{-{n}_{0}}\mathcal{Z}\left\{y\left[n\right]\right\}$ Then just rearrange the terms. $\begin{array}{rl}y\left[n\right]& =\alpha x\left[n\right]+\left(1-\alpha \right)y\left[n-1\right]\\ \mathcal{Z}\left\{y\left[n\right]\right\}& =\mathcal{Z}\left\{\alpha x\left[n\right]+\left(1-\alpha \right)y\left[n-1\right]\right\}\\ & =\alpha \mathcal{Z}\left\{x\left[n\right]\right\}+\left(1-\alpha \right)\mathcal{Z}\left\{y\left[n-1\right]\right\}\\ & =\alpha \mathcal{Z}\left\{x\left[n\right]\right\}+\left(1-\alpha \right){z}^{-1}\mathcal{Z}\left\{y\left[n\right]\right\}\\ Y\left(z\right)& =\alpha X\left(z\right)+\left(1-\alpha \right){z}^{-1}Y\left(z\right)\\ Y\left(z\right)-\left(1-\alpha \right){z}^{-1}Y\left(z\right)& =\alpha X\left(z\right)\\ \frac{Y\left(z\right)}{X\left(z\right)}& =\frac{\alpha }{1-\left(1-\alpha \right){z}^{-1}}\\ H\left(z\right)& =\frac{\alpha }{1-\left(1-\alpha \right){z}^{-1}}\end{array}$
#### Using the impulse response
$\begin{array}{rl}H\left(z\right)& =\mathcal{Z}\left\{h\left[n\right]\right\}\\ & =\sum _{n=0}^{\mathrm{\infty }}h\left[n\right]{z}^{-n}\\ & =\alpha \sum _{n=0}^{\mathrm{\infty }}{\left(1-\alpha \right)}^{n}{z}^{-n}\\ & =\alpha \sum _{n=0}^{\mathrm{\infty }}{\left(\frac{1-\alpha }{z}\right)}^{n}\\ & =\frac{\alpha }{1-\frac{1-\alpha }{z}}\end{array}$ The last step is only valid if the sum converges, which is the case for $\left|\frac{1-\alpha }{z}\right|<1$, i.e. $|z|>|1-\alpha |$. (See Infinite Geometric Series)
#### Poles and zeros
In these expressions, $z$ is a complex variable, and $H\left(z\right)$ is a complex function.
There are a couple of interesting values for $z$: values that result in the numerator becoming zero, called zeros, and values that result in the denominator becoming zero, called poles. $\begin{array}{rl}H\left(z\right)& =\frac{\alpha }{1-\frac{1-\alpha }{z}}\\ & =\frac{\alpha z}{z-\left(1-\alpha \right)}\end{array}$ By rewriting the transfer function, we can easily see that $z=0$ is a zero, and $z=1-\alpha$ is a pole.
The poles and zeros determine the overall effect of the transfer function, so pole-zero plots are a very useful tool when describing filters.
This is the pole-zero plot of the same example EMA as before, with $\alpha =0.25$. The zero in the origin is indicated by an O, and the pole at $1-\alpha =0.75$ by an X.
### Frequency response
An important property of discrete-time linear time-invariant systems is that it preserves the pulsatance (angular frequency) of sinusoidal signals, only the phase shift and the amplitude are altered. In other words, sines and cosines are eigenfunctions of DTLTI systems.
This makes it relatively easy to express the frequency response (sometimes called magnitude response) of a filter.
We're interested in the spectrum of frequencies that a signal contains, so it makes sense to decompose it as a sum of sines and cosines. That's exactly what the discrete-time Fourier transform does: ${X}_{2\pi }\left(\omega \right)=\sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}x\left[n\right]{e}^{-i\omega n}$ Note that this is just a special case of the Z-transform, where $z={e}^{i\omega }$.
The frequency response of the filter describes how the spectrum of a signal is altered when it passes through the filter. It relates the spectrum of the output signal $Y\left(\omega \right)$ to the spectrum of the input signal $X\left(\omega \right)$. We already had an expression for the spectrum of the output divided by the spectrum of the input; we called it the transfer function $H\left(z\right)=Y\left(z\right)/X\left(z\right)$. To get the frequency response of the filter, we can just evaluate the transfer function for $z={e}^{i\omega }$. Also note that this is the DTFT of the impulse response $h\left[k\right]$. $\begin{array}{rl}{\mathcal{F}}_{DTFT}\left\{h\left[k\right]\right\}& =\sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}h\left[n\right]{e}^{-i\omega n}\\ & =H\left({e}^{i\omega }\right)\\ & =\frac{\alpha }{1-\left(1-\alpha \right){e}^{-i\omega }}\end{array}$ We can now calculate the amplitude of each frequency component in the output by taking the modulus of $H\left({e}^{i\omega }\right)$. For reasons that will become apparent in a minute, we'll calculate the modulus squared. We use Euler's formula to write the exponential in terms of sines and cosines.
$\begin{array}{rl}|H\left({e}^{i\omega }\right){|}^{2}& =\frac{{\alpha }^{2}}{|1-\left(1-\alpha \right){e}^{-i\omega }{|}^{2}}\\ & =\frac{{\alpha }^{2}}{|1-\left(1-\alpha \right)\left(\mathrm{cos}\left(-\omega \right)+i\mathrm{sin}\left(-\omega \right)\right){|}^{2}}\\ & =\frac{{\alpha }^{2}}{\left(1-\left(1-\alpha \right)\mathrm{cos}\left(\omega \right){\right)}^{2}+\left(\left(1-\alpha \right)\mathrm{sin}\left(\omega \right){\right)}^{2}}\\ & =\frac{{\alpha }^{2}}{1-2\left(1-\alpha \right)\mathrm{cos}\left(\omega \right)+\left(1-\alpha {\right)}^{2}{\mathrm{cos}}^{2}\left(\omega \right)+\left(1-\alpha {\right)}^{2}{\mathrm{sin}}^{2}\left(\omega \right)}\\ & =\frac{{\alpha }^{2}}{1-2\left(1-\alpha \right)\mathrm{cos}\left(\omega \right)+\left(1-\alpha {\right)}^{2}}\end{array}$ $\omega$ is the normalized pulsatance in radians per sample. You can substitute it with $\omega =\frac{2\pi f}{{f}_{s}}$ where $f$ is the frequency in Hertz, and ${f}_{s}$ is the sample frequency of the system in Hertz.
We can now plot the filter's gain as a function of the frequency. These plots often use a logarithmic scale, to show the gain in decibels. In order to calculate the power gain, the amplitude is squared. Note that when a frequency is completely suppressed in the output signal, the gain will be $-\mathrm{\infty }\ \mathrm{dB}$. If a frequency has the same amplitude in the output signal as in the input signal, the gain will be $0\ \mathrm{dB}$. You can clearly see the low-pass behavior of the EMA: low frequencies have a near-unit gain, and high frequencies are attenuated.
To get a better understanding of where this curve comes from, we can plot the entire $|H\left(z\right)|$ surface in the Z-domain. As mentioned above, the DTFT is just a special case of the Z-transform, where $z={e}^{i\omega }$, i.e. the unit circle in the complex plane. Remember that the point $z={e}^{i\omega }$ is a point with a distance of $1$ to the origin, and with an angle of $\omega$ between its position vector and the positive x axis. The image of the unit circle is shown in blue. Notice that this is the same curve as the blue curve in the magnitude response graph above: close to 0 when $\omega \to 0$ (the right half of the circle) and negative when $\omega \to \pi$ (the left half of the circle).
You can clearly see the effect of the pole at $0.75+0i$. The zero in the origin has no effect on the frequency response, because it doesn't alter the image of the unit circle, since $|{e}^{i\omega }|=1$ or $\mathrm{log}\left(|{e}^{i\omega }|\right)=0$.
#### Cutoff frequency
The cutoff frequency is defined as the frequency of the half-power point, where the power gain is a half. It's often called the $-3~\text{dB}$ point, because $10\log_{10}\left(\frac{1}{2}\right) \approx -3.01~\text{dB}$.
To find it, just solve the following equation: $|H(e^{i\omega_c})|^2 = \frac{1}{2}$

$$\frac{\alpha^2}{1-2(1-\alpha)\cos(\omega_c)+(1-\alpha)^2} = \frac{1}{2} \qquad\Longrightarrow\qquad \omega_c = \arccos\left(\frac{\alpha^2+2\alpha-2}{2\alpha-2}\right)$$
For example, if $\alpha = \frac{1}{4}$, then $\omega_c = \arccos\left(\frac{23}{24}\right) \approx 0.2897~\frac{\text{rad}}{\text{sample}}$.
To convert it to a frequency in Hertz, you can multiply $\omega$ by $\frac{{f}_{s}}{2\pi }$, with ${f}_{s}$ the sample frequency.
For example, if the sample frequency is $f_s = 1000~\text{Hz}$, then the cutoff frequency is $f_c = \omega_c \cdot \frac{f_s}{2\pi} \approx 46.1~\text{Hz}$.
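Putting the last two formulas together, here is a short check in plain Python (the 1000 Hz sample rate is just an example value):

```python
from math import acos, cos, pi

alpha = 0.25   # EMA smoothing factor
f_s = 1000.0   # sample frequency in Hz (example value)

# Cutoff frequency in radians per sample: w_c = arccos((a^2 + 2a - 2) / (2a - 2))
w_c = acos((alpha**2 + 2 * alpha - 2) / (2 * alpha - 2))
print(w_c)  # ≈ 0.2897 rad/sample

# Check that it really is the half-power point: |H(e^{j w_c})|^2 = 1/2
power = alpha**2 / (1 - 2 * (1 - alpha) * cos(w_c) + (1 - alpha)**2)
print(power)  # ≈ 0.5

# Convert to Hertz
f_c = w_c * f_s / (2 * pi)
print(f_c)  # ≈ 46.1 Hz
```

Substituting the cutoff pulsatance back into the power-gain formula returns exactly one half, as it should.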
#### Plotting the frequency response in Python
We can use the SciPy and Matplotlib modules to plot the frequency response in Python.
The SciPy freqz function expects the transfer function coefficients in the form $H\left(z\right)=\frac{{b}_{0}+{b}_{1}{z}^{-1}+{b}_{2}{z}^{-2}+\cdots +{b}_{p}{z}^{-p}}{{a}_{0}+{a}_{1}{z}^{-1}+{a}_{2}{z}^{-2}+\cdots +{a}_{q}{z}^{-q}}.$ This is the reverse of the usual ordering of polynomial coefficients.
In the case of the exponential moving average filter, the transfer function is $H(z) = \frac{\alpha}{1+(\alpha-1)z^{-1}}$, so $b_0 = \alpha$, $a_0 = 1$ and $a_1 = \alpha-1$.
from scipy.signal import freqz
import matplotlib.pyplot as plt
from math import pi, acos
import numpy as np
alpha = 0.25
b = np.array((alpha,))  # numerator coefficients as a 1-D array
a = np.array((1, alpha - 1))
print("b =", b) # Print the coefficients
print("a =", a)
x = (alpha**2 + 2*alpha - 2) / (2*alpha - 2)
w_c = acos(x) # Calculate the cut-off frequency
w, h = freqz(b, a) # Calculate the frequency response
plt.subplot(2, 1, 1) # Plot the amplitude response
plt.suptitle('Bode Plot')
plt.plot(w, 20 * np.log10(abs(h))) # Convert to dB
plt.ylabel('Magnitude [dB]')
plt.xlim(0, pi)
plt.ylim(-18, 1)
plt.axvline(w_c, color='red')
plt.axhline(-3, linewidth=0.8, color='black', linestyle=':')
plt.subplot(2, 1, 2) # Plot the phase response
plt.plot(w, 180 * np.angle(h) / pi) # Convert argument to degrees
plt.xlabel('Frequency [rad/sample]')
plt.ylabel('Phase [°]')
plt.xlim(0, pi)
plt.ylim(-90, 90)
plt.yticks([-90, -45, 0, 45, 90])
plt.axvline(w_c, color='red')
plt.show() | 2021-10-21 11:58:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 82, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9270180463790894, "perplexity": 275.69058804496626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585405.74/warc/CC-MAIN-20211021102435-20211021132435-00284.warc.gz"} |
Park, Sheppard, Yu, Dolan, Eom, Brooks, and Borgatti: Comparative assessment on the influences of effluents from conventional activated sludge and biological nutrient removal processes on algal bloom in receiving waters
### Abstract
The goal of this study was to evaluate the effect of effluents from conventional activated sludge (CAS) and biological nutrient removal (BNR) processes on algal bloom in receiving waters. We made multiple effluent sampling from one CAS and two BNR facilities, characterized their effluents, and conducted bioassay using river and ocean water. The bioassay results showed that CAS effluents brought similar productivity in both river and ocean water, while BNR effluents were more reactive and productive in ocean water. Unexpectedly, nitrogen-based biomass yields in ocean water were up to six times larger for BNR effluents than CAS effluent. These results indicated that nitrogen in BNR effluents, although its total concentration is lower than that of CAS effluent, is more reactive and productive in ocean water. The ocean water bioassay further revealed that effluents of BNR and CAS led to considerably different phytoplankton community, indicating that different characteristics of effluents could also result in different types of algal bloom in receiving waters. The present study suggests that effects of upgrading CAS to BNR processes on algal bloom in receiving waters, especially in estuary and ocean, should be further examined.
### 1. Introduction
Eutrophication, or increased algal blooms due to excessive nutrient loading to a body of water, is a problem for numerous water bodies throughout the world. One such affected water body is the Long Island Sound (LIS), an estuary bordered by the states of New York and Connecticut on three sides and open to the Atlantic Ocean on its fourth side. The LIS has been experiencing seasonal hypoxia and subsequent fish kills for years, especially in the bottom waters on its western border [1]. In order to improve conditions on the LIS, an agreement between the U.S. EPA and the state governments of New York and Connecticut was reached [2]. The affected states were required to first freeze nitrogen loadings to the LIS at 1990 levels and then eventually reduce N loadings (from human sources) to the LIS by 58.5% by the year 2014 [2]. One major step taken to achieve this goal was the upgrade of numerous wastewater treatment plants (WWTPs) from conventional activated sludge (CAS) processes to biological nutrient removal (BNR) processes. As of approximately ten years after the initial implementation of the plan, around $650 million had been spent on reducing N loadings from point sources [1]. Despite these efforts the LIS area affected by hypoxia in 2006 was actually larger than the area affected in 1991 [3], and eutrophication and hypoxia problems have continued to occur in LIS [4].
Traditionally, BNR processes remove dissolved inorganic N (DIN) by adopting extended solids retention time (SRT) with aerobic nitrification and anoxic denitrification in the main stream of wastewater treatment [5]. In spite of vigorous biological wastewater treatment involved in BNR processes, dissolved organic N (DON) remains persistent through treatment, leading to high DON/DIN in low TN effluents [68].
More studies have reported that phytoplankton can use DON as a source of N [7, 911] and it could actually lead to higher algal biomass yield than DIN [10]. The marine diatom Thalassiosira has been recently sequenced [12] and it was found to have genes for plasma membrane amino acid transporters (reviewed by Bronk et al. [11]), providing genetic evidence that some phytoplankton can directly use amino acids, a pool of DON, for their metabolism. It can also be inferred that uptake of DON compared to DIN could lead to less energy expense for algal anabolism, possibly accounting for higher growth yields. Literature has also shown that DON as a primary N source could select different phytoplankton community than DIN. Berman and Chava [10] showed that N-fixing cyanobacteria proliferated in the media containing urea (DON) as sole N source than DIN. In a related context, Gilbert et al. [13] found that cyanobacterial bloom in Florida Bay has been associated with the highest concentration of DON. All this information implies that BNR effluents, with low TN but high DON/DIN, can render DON to become more readily available and spur the growth of certain phytoplankton community.
What might also be important to understand is the effect of types or characteristics (e.g., hydrophobicity, size, etc.) of DON on algal blooms in the receiving water. Berman and Chava [10] made a proposition that different types of DON also affected the structure of the phytoplankton community. Our earlier study [8] investigated effluent proteins from three full-scale WWTPs (two CAS and one BNR facilities) and found that the type of proteins present in CAS and BNR effluents were different, indicating that the characteristics of DON in CAS and BNR effluents could also be considerably different. In addition, Westgate and Park [8] found that there was a difference for CAS and BNR effluents with respect to the profile of proteolytic enzymes. The protease profile was very similar for the three plant influents, but its profile in BNR effluent was more diverse than those in CAS effluents. This suggested that the more complex proteases in BNR effluents, which would enter into the receiving water, resulted from the more complicated biological processes used in BNR systems. Consequently, it can be expected that when different DONs in BNR and CAS effluents enter into the receiving water they may also lead to different effects on nutrient cycling and algal growth.
The information provided in literature and findings from our previous research led us to develop a hypothesis that upgrading of WWTPs from CAS to BNR processes may not necessarily bring reduction in the events of algal blooms but cause different types of algal blooms, possibly with more abundant algal growth. To test this hypothesis, we have conducted laboratory bioassay experiments on field CAS and BNR effluents using both Connecticut River and LIS water; the bioassay on Connecticut River was included because it is the largest freshwater draining to LIS and also to learn the effect of same effluents on algal growth in both river and ocean water. The focus of the study was to find the biomass yield of CAS and BNR effluents, based on consumption of their nitrogen. During the study, we also characterized differences in the algal community that results from incubating BNR and CAS effluents in the same LIS water.
### 2. Materials and Methods

### 2.1. Sampling of Full-Scale Wastewater Treatment Effluents
Samples of secondary effluent were collected from three WWTPs (one CAS and two BNR facilities) in winter, spring, and early summer period. CAS and BNR1 facilities are located in Western Massachusetts while the BNR2 facility is located farther downstream in Connecticut. All three facilities discharge their effluents to the Connecticut River permanently draining into LIS. The CAS facility (7.1 MGD design flow) uses a conventional activated sludge system with mechanical aeration at a solids retention time (SRT) in the range of 5–10 days. The BNR1 facility (67 MGD) uses the Ludzack-Ettinger (LE) process with a relatively long SRT, 25 days. The BNR2 facility (2.1 MGD) adopts a modified LE (MLE) process in the SRT of 15 days. Effluent samples were transported to the University of Massachusetts laboratory immediately after collection and were stored at 4°C (within two hours of collection). Within 24 hours of storage, whole effluent samples or effluents passed through a 0.45 μm filter (Millipore Express PLUS filter, Millipore, Billerica, MA, USA) were used for bioassay or transferred to separate 15 mL vials, frozen, and stored for later chemical analysis.
### 2.2. Sampling of Receiving Water
Receiving water samples for bioassay were collected from the Connecticut River and LIS. Samples from the Connecticut River were collected from a boat dock to avoid collecting organic matter from plants and other materials in the shallow water near the banks. The dock was located at a point along the Connecticut River in Hadley, MA and the travel time of the water from this point to LIS is approximately one day (C. Brown, personal communication). Water from LIS was collected from the White Sand Beach in Old Lyme, CT near the point where the Connecticut River drains into the Sound. Samples were collected away from the shore in order to avoid collecting large amounts of sand and other particles. Receiving water was transported to the laboratory immediately after collection and stored at 4°C (within 2 hours of collection). Samples of receiving water were then passed through a 100 μm Nylon Net filter for bioassay.
### 2.3. Set-up and Sampling of Bioassay Experiments
Bioassay experiments were conducted by mixing 1 L of effluent and 1 L of receiving water in a 2 L Pyrex bottle. This dilution rate does not mimic the natural dilution of effluents in the river and LIS, but was chosen to facilitate the study on the effects of different effluents on the same receiving waters. Note that a recent bioassay study [14], using higher dilution rates (4 and 10 times), still produced a similar trend of the data seen in the current study. The contents of the bottles were completely mixed using a stir bar and plate. The bioassay bottles were covered with 100 μm Nylon Net filter fabric to prevent dust from entering into the bottles. The bioassay took place on the bench top next to windows (windows making entire wall of the lab) for exposure to natural light and dark conditions. The laboratory room was air-conditioned and temperature was maintained at 21 ± 2°C. Sample collection times were varied between incubation sets and were based on the reactions observed in each bioassay. The December and March bioassay sets consisted of a single bioassay bottle for each effluent. The May bioassay set consisted of duplicates for each effluent.
### 2.4. Chemical Analysis
Inorganic nitrogen species (ammonium, nitrate, and nitrite) were measured using a Metrohm ion chromatograph (IC) (Metrohm, Herisau, Sz) and dissolved (< 0.45 μm) total nitrogen (DTN) was measured using a Shimadzu TN/TOC analyzer (Shimadzu TOC-VCPH with TNM-1, Shimadzu North America, SSI Inc., Columbia, MD, USA). The TOC/TN machine detects concentrations of TOC down to 0.4 mg/L as carbon and 0.2 mg/L of TN as nitrogen. The experimental lower detection limit of the IC varies for each ion measured. Nitrite and nitrate can be effectively measured at concentrations down to 5 μg/L and ammonium to 20 μg/L. DON was then calculated using the following equations:
$$\text{DON} = \text{DTN} - \left(\text{NH}_4^+\text{-N} + \text{NO}_3^-\text{-N} + \text{NO}_2^-\text{-N}\right)$$
Phosphate (PO43−) concentrations in effluents were measured using the same IC used for inorganic nitrogen and its detection limit was 20 μg/L. Total and volatile suspended solids (TSS and VSS) were measured according to Standard Methods [15]. Protein concentrations were determined using both the original Lowry method [16] and the Frølund modification of the Lowry method [17], as the revised method accounts for the interference of humic substances. Both values were compared and, based on other nitrogen concentrations obtained, the modified protein values were chosen for the current study. The absorbance of the protein assay was read on a Thermospectronic Genesys 10 UV Spectrophotometer (Thermo Spectronic, Madison, WI, USA) and the standard curve was determined using known concentrations of bovine serum albumin (BSA) (Fisherbrand Scientific, Pittsburg, PA, USA). All protein concentrations in this study are expressed as mg/L-N: N accounts for 16% of BSA molecule [8].
### 2.5. Microscopic Analysis
To study diversity and population of phytoplankton growing in each bioassay bottle, various microscopic analyses were adopted during this study. For bioassays conducted in December and March periods, a light microscope (Olympus CH2, Japan) was used. During the May bioassay, we also conducted phase contrast microscopy (Nikon Labophot Microscope, Japan). To obtain sufficient algal cell concentration for phase contrast microscopic analyses, 10 mL of bioassay sample was first filtered through a 0.45 μm filter and filtered algal cells were re-suspended in 2 mL of sample filtrate. Wet mounts of the concentrated sample solutions were prepared and images of phase contrast microscopes were taken using a digital camera (Sony HD digital camera, Japan).
### 3. Results and Discussion
The results of the study are presented for three effluent sampling events and the subsequent receiving-water bioassays.
### Effluent Sampling and Bioassay in December
The first effluent sampling and bioassay were conducted for one CAS and one BNR (BNR1) facility. As expected, BNR1 effluent showed much lower dissolved total nitrogen (DTN) in comparison to CAS effluent (Table 1). BNR effluent showed a relatively similar composition of ammonia, nitrate, nitrite, and dissolved organic N (DON), making up DTN at 2.23 mg/L-N. DON comprised almost 30% of DTN, some of which was actually protein at 0.48 mg/L-N. In contrast, DTN in CAS effluent was mostly ammonia, and the shorter SRT (5–10 days) used in this facility accounted for this high ammonia in the effluent. DON in CAS effluent was higher in comparison to BNR effluent, but its fractional composition was much smaller than the counterpart. Both Connecticut River and LIS water showed very similar DTN at around 0.3 mg/L-N. The PO43− concentrations were similar between the two effluents, 0.72 mg/L-P for CAS and 1.04 mg/L-P for BNR.
For bioassay experiments, filtered (0.45 μm) effluents were incubated with the river water and LIS ocean water. It was visually observed that the ocean bioassay for BNR effluent led to the fastest algal growth. Light brownish algal biomass (mainly diatoms) started blooming on day 5, which was about four days earlier than all other bioassays. This observation can be further explained with the analysis of chemical data. Fig. 1 shows changes in DTN measured during the first bioassay. Consistent with the visual observation, it was the ocean bioassay for BNR effluent that showed the fastest N uptake. The N consumption in all other bioassays, including the river bioassay for BNR effluents, was much slower and similar to each other. These data indicated that N in BNR effluent was more reactive or more bioavailable in the ocean water.
We also measured soluble (< 0.45 μm) proteins, a significant fraction of DON [8], during the bioassay (Fig. 2). The fate of proteins in BNR-ocean bioassay was dynamic, showing early degradation, re-generation, and re-degradation. Depletion of soluble proteins occurred on day 4, which was a day prior to the first algal bloom observed, along with a sudden decrease in DTN (Fig. 1). This indicates that the bioavailability and degradation of proteins played an important role in the first algal bloom in the BNR-ocean bioassay. The dramatic increase and decrease of soluble proteins over the later period indicates that proteins were also released during phytoplankton growth and became bioavailable for further microbiological reactions. In contrast, ocean bioassay for CAS effluent and river bioassays for both CAS and BNR effluents did not show such change in soluble proteins throughout the incubation period. Furthermore, reduction of DTN in these sets was only seen after eight days of incubation (Fig. 1). These results suggest that high salinity in ocean water helped BNR-protein become bioavailable, which might be relevant to the results of Bronk et al. [18] who reported that degradation of effluent organic N in an estuary occurred via the salinity-mediated N release. The salinity-induced degradation, however, should not be a sole factor for our observation, because then a similar pattern should have also been observed in CAS-ocean bioassay. Therefore, it is likely that different characteristics of proteins and organic N in BNR effluent in comparison to CAS effluent [8] also account for the degradation of BNR-proteins in ocean water and early algal bloom in that particular bioassay.
The maximum biomass (VSS) growth, consumed N, and N-based biomass yields are summarized in Table 2. BNR effluent led to higher VSS yield for both river and LIS bioassays. The difference was particularly prominent for the ocean bioassay. The ocean bioassay with BNR effluent resulted in N-based VSS yield to be approximately six times greater than the CAS-ocean bioassay. These results indicate that N in BNR effluent was not only more reactive (Fig. 1, 2) but also more efficient in supporting biomass growth compared to N present in CAS effluent.
The VSS generation potential can be obtained by multiplying these N-based VSS yields with actual N concentration in each effluent. In this way, ocean bioassays with filtered BNR effluent would result in values of 80 mg VSS/L. The same approach would generate 93 mg VSS/L with CAS effluent. The difference is not that huge despite the fact that CAS effluent contained approximately eight times greater N than BNR effluent. These results imply that the type of DON present in BNR effluent is more potent to cause algal growth than DIN as well as DON in CAS effluent.
### Effluent Sampling and Bioassay in March
For the second set of bioassay, we included effluent from one more BNR facility (BNR2). This time, bioassays were conducted with only LIS ocean water and also whole (unfiltered) effluent. The summary of N composition in three effluents and receiving waters is shown in Table 3. The N data for CAS and BNR1 effluents were very similar with those from the December sampling event. Again, DON in BNR1 effluent was comprised of a large fraction, 38%, of DTN. DTN from the BNR2 facility was larger than we originally expected, since this facility usually achieves the effluent N in the range of 5–8 mg/L-N. Later, we learned that the higher effluent N from BNR2 facility was due to spring runoff that had occurred the day before sampling; the plant turned off their pre-anoxic operation and internal mixed liquor recirculation to deal with the higher influent flow rate. Nevertheless, DTN concentration was lower than that of CAS and was also mostly composed of nitrate and relatively high DON. For phosphate concentrations, BNR2 showed the highest concentration at 2.12 mg/L, followed by CAS at 1.74 mg/L, and by BNR1 at 1.16 mg/L.
Fig. 3 shows changes in total VSS and DTN that occurred during the second bioassay. In all three bioassays, almost all biomass growth and DTN consumption took place before 12 days of incubation. Nevertheless, the reaction patterns were considerably different among the bioassays. Consistent with the results from the December bioassay, effluent from BNR1 facility caused much faster growth reaction, which happened simultaneously with faster N uptake. In bioassay with BNR1 effluent, maximum VSS growth and N uptake were achieved within five days of incubation, which was seven days faster than the bioassay with CAS effluent. This result supports the earlier observation that N in BNR effluent, most likely DON in BNR effluent, is more reactive than DIN and DON in CAS effluent. Interestingly, in BNR2 bioassay, N was consumed fast but the growth of biomass was more steady and slower than BNR1 bioassay. Since the current BNR2 effluent contained high NO3, this result was likely attributed to the fast uptake of NO3; however, biomass growth was more similar to CAS effluent bioassay.
Biomass yield based on N consumption (Table 4) showed that BNR1 effluent led to much higher VSS yield in the ocean water in comparison to any other bioassay sets. The clear difference between VSS yield for CAS and BNR1 was consistent with the December bioassay set. This time, the COD yield of BNR1 was four times larger than that from CAS bioassay. The COD yield of BNR2 bioassay was not much different from the CAS bioassay. This result again implies that nitrate, the major N species in BNR2 effluent, led to faster algal growth in the ocean water, but the overall biomass yield, measured by N-based COD yield, was similar to the bioassay for CAS effluent which had ammonia as a dominant N species.
### Effluent sampling and bioassay in May
In May, we conducted our third effluent sampling and laboratory bioassay experiments. We collected effluents from the same facilities sampled in March. Effluent N concentrations for CAS and BNR1 were consistent for the May sample set as well (Table 5). It was noticeable that BNR2 effluent showed much lower DTN this time due to a normal MLE process occurring in the facility. It is also worth noting that DON in BNR2 effluent now comprised up to 38% of DTN.
Fig. 4 shows that BNR1 effluent led to the fastest VSS production, indicating more reactivity of BNR1 effluent in LIS water than any other effluents. In this bioassay, almost all biomass growth was achieved in four days of incubation. CAS effluent set led to a slower VSS growth but the maximum VSS value was greater than that of BNR1. The BNR2 effluent set showed the slowest VSS increase but there was sudden bloom on day 8, which is similar to the pattern in BNR1 bioassay.
Phase contrast microscopic analysis revealed that the first algal bloom that occurred in BNR1 set (on day 4) was caused by dominant growth of marine diatom Thalassiosira sp. and Skeletonema sp. (Fig. 5). Thalassiosira has been recently found to possess the genes that can express transporter proteins to uptake amino acids [12], indicating that they are able to directly uptake DON from the surrounding environment. A combination of their genetic capability and the characteristics of BNR1 effluent in ocean water likely caused the early Thalassiosira bloom in this bioassay. Since Thalassiosira and Skeletonema are among dominant species of algae found in LIS blooms on a biomass basis [1], our results emphasize the importance of DON in BNR effluents and its role in algal bloom in the receiving ocean water.
We continued algal image analysis during the current bioassay and additional results are presented in Fig. 6. Samples obtained from the BNR1 bioassay on days 7 and 10 showed mostly diatoms and a variety of diatoms in comparison to day 4. Samples from the CAS bioassay were dominated by diatom Cyclotella on day 7, and by a larger number of green algae than Cyclotella on day 12. This data indicates that the original algal communities that developed in BNR1 and CAS bioassays were substantially different due to different characteristics of the effluents. BNR2 bioassay was also observed to have a very different phytoplankton community. The microscopic images of the samples from BNR2 bioassay taken on day 7 and 11 showed a diatom bloom dominated by Chaetoceros. Overall, these results clearly suggest that different inputs of wastewater effluents select for different types of phytoplankton community in the same LIS water. This observation supports the proposition that different N species have significant effects on regulating the structure of the microalgal community [10].
Finally, N-based VSS yield for the May bioassay set is summarized in Table 6. BNR1 bioassay led to a higher biomass yield in comparison to the CAS bioassay, which has been a consistent finding throughout the current study. It is also worth noting that the bioassay for BNR2 effluent also caused higher VSS yield than CAS effluent, this time, indicating that BNR effluents led to effective VSS production.
### 3.1. Implications
The current study employed simple bioassay-based experiments to evaluate the effects of CAS and BNR effluents on algal blooms in the same receiving water. Although bioassays are limited in mimicking natural processes, the results of this study support that N in BNR effluents is more productive than N in CAS effluents. The results seen in the December set for filtered effluents and the recent study [14] suggest that not only N-based biomass yields but total biomass generation could also be higher for BNR effluents than CAS effluents. These are certainly unintended results because many WWTPs are upgrading from CAS to BNR processes to comply with more stringent N regulations. We postulate that these unexpected results are caused because DON in BNR effluents is readily bioavailable, compared to DON in CAS effluents, and also more effective for supporting algal biomass than DIN. These results are not likely caused by different influent sources because our ongoing research using CAS and BNR effluents generated from the same source of wastewater also showed very similar results [14]. The effects of upgrading CAS to BNR processes on eutrophication in receiving ocean waters need to be considered, and more research is warranted. Finally, the knowledge obtained from current and future research should be taken into consideration when establishing a permit-based effluent quality solution.
### 4. Conclusions
The specific conclusions that were drawn from this study are summarized as follows:
• The bioassay adopted in the current study showed that BNR effluents, compared to CAS effluents, produced up to six times higher N-based biomass yields in Long Island Sound water.
• BNR effluents resulted in faster N uptake and biomass growth in ocean water than river water, while CAS effluents behaved similarly in both river and ocean water.
• Both salinity and characteristics of BNR-originated DON should be responsible for faster reactions and increased biomass yield for BNR effluents in ocean water.
• Species of N and characteristics of DON input to an ocean receiving water play a key role in regulating the structure of phytoplankton community.
• Discharging BNR effluents may still cause algal bloom issues and associated hypoxic conditions in the N-limited ocean environment.
### Acknowledgements
This work was supported by the Springfield Water and Sewer Commission under 109-1698, 111-0016, 111-1388 and by the Massachusetts Water Resource Research Center under 2009MA186B. We gratefully acknowledge the wastewater treatment facility employees who have assisted us in sample collection and provided us with any data we sought.
### References
1. O’Shea ML, Brosnan TM. Trends in indicators of eutrophication in Western Long Island Sound and the Hudson-Raritan Estuary. Estuaries. 2000;23:877–901.
2. U.S. EPA, United States Environmental Protection Agency. Managing common estuarine environmental problems. Retrieved July 27, 2011, from http://water.epa.gov/type/oceb/nep/about4.cfm. 2011.
3. Stelloh T. $671M later, no clear picture of Sound’s health. The Advocate. 2007;1:A4.
4. Suter EA, Lwiza KMM, Rose JM, Gobler C, Taylor GT. Phytoplankton assemblage changes during decadal decreases in nitrogen loadings to the urbanized Long Island Sound estuary, USA. Mar Ecol Prog Ser. 2014;497:51–67.
5. Grady CPL, Daigger GT, Lim HC. Biological wastewater treatment. 2nd ed. Marcel Dekker, Inc.; New York, NY: 1999.
6. Pehlivanoglu E, Sedlak DL. Bioavailability of wastewater-derived organic nitrogen to the alga Selenastrum Capricornutum. Water Res. 2004;38:3189–3196.
7. Urgun-Demirtas M, Sattayatewa C, Pagilla KR. Bioavailability of dissolved organic nitrogen in treated Effluents. Water Environ Res. 2008;80:397–406.
8. Westgate PJ, Park C. Evaluation of proteins and organic nitrogen in wastewater treatment effluents. Environ Sci Technol. 2010;44:5352–5357.
9. Berman T. Dissolved organic nitrogen utilization by an Aphanizomenon bloom in lake Kinneret. J Plankton Res. 1997;19:577–586.
10. Berman T, Chava S. Algal growth on organic compounds as nitrogen sources. J Plankton Res. 1999;21:1423–1437.
11. Bronk DA, See JH, Bradley P, Killberg L. DON as a source of bioavailable nitrogen for phytoplankton. Biogeosciences. 2007;4:283–296.
12. Armbrust EV, Berges JA, Bowler C, et al. The genome of the diatom Thalassiosira pseudonana: ecology, evolution, and metabolism. Science. 2004;306:79–86.
13. Glibert PM, Heil CA, Hollander D, et al. Evidence for dissolved organic nitrogen and phosphorus uptake during a cyanobacterial bloom in Florida Bay. Mar Ecol Prog Ser. 2004;280:73–83.
14. Eom H. Investigation of effluent nitrogen derived from conventional activated sludge (CAS) and biological nutrient removal (BNR) systems and its impact on algal growth in receiving waters. PhD Dissertation. University of Massachusetts; Amherst, MA, USA: 2016.
15. APHA. Standard methods for the examination of water and wastewater. 21st ed. American Public Health Association, American Water Works Association, Water Environment Federation; Washington D.C., USA: 2005.
16. Lowry OH, Rosebrough NJ, Farr AL, Randall RJ. Protein measurement with the Folin phenol reagent. J Biol Chem. 1951;193:265–275.
17. Frølund B, Griebe T, Nielsen PH. Enzymatic activity in the activated sludge floc matrix. Appl Microbiol Biot. 1995;43:755–761.
18. Bronk DA, Roberts QN, Sanderson MP, et al. Effluent organic nitrogen (EON): bioavailability and photochemical and salinity-mediated release. Environ Sci Technol. 2010;44:5830–5835.
##### Fig. 1
Changes in dissolved total nitrogen (DTN) in the ocean and river bioassay for 0.45 μm filtered CAS and BNR effluent; the December bioassay.
##### Fig. 2
Changes in soluble protein in the ocean and river bioassay for CAS and BNR effluent; the December bioassay.
##### Fig. 3
Changes in total VSS and DTN in the ocean bioassay for whole effluent of CAS (top), BNR1 (middle), and BNR2 (bottom); the March bioassay.
##### Fig. 4
Changes in total biomass (VSS) in the ocean bioassay set for whole effluent of CAS, BNR1, and BNR2; the May bioassay. Results are the average of the duplicate bioassays. Error bars show the range of the duplicate bioassays.
##### Fig. 5
Thalassiosira and Skelotonema blooming in the ocean bioassay for BNR1 effluent (day 4), May 2011: X200 magnification phase contrast microscopy.
##### Fig. 6
Bloom of different phytoplankton community in the LIS ocean bioassay: whole effluent of CAS (left column), BNR 1 (middle), and BNR 2 (right). The bioassay set for CAS effluent showed bloom of diatom Cyclotella on day 7 and green algae on day 12. The bioassay set for BNR 1 effluent on day 7 and day 10 showed growth and death of several species of diatoms including Pinnularia, Skelotonema, and Thalassiosira. The bioassay set for BNR 2 effluent showed growth of diatom Chaetoceros on day 7 and their bloom on day 11. Images were taken from phase contrast microscopy: the magnification for each image is included within the figure. The same LIS ocean water was used for incubating these three different wastewater effluents in May 2011.
##### Table 1
Concentration and Composition of Nitrogen in Cas Effluent, Bnr Effluent, and Receiving Waters for the December Bioassay
| | DTN (mg/L) | NO3-N (mg/L) | NO2-N (mg/L) | NH4-N (mg/L) | DON (mg/L) | DON/DTN (%) | Soluble Protein-N (mg/L) |
|---|---|---|---|---|---|---|---|
| CAS | 16.97 | 0.25 | BDL | 15.44 | 1.27 | 7.5 | 0.79 |
| BNR1 | 2.23 | 0.50 | 0.28 | 0.80 | 0.66 | 29.4 | 0.48 |
| CT River | 0.32 | 0.28 | BDL | BDL | 0.04 | 12.0 | 0.45 |
| LIS | 0.31 | - | - | - | - | - | - |

BDL: below detection limit; -: not measured.
##### Table 2
Results from the December River and Ocean Water Bioassay
| | | CAS filtered effluent | BNR1 filtered effluent |
|---|---|---|---|
| River | Maximum VSS produced (mg/L) | 40 | 16 |
| | N consumed (mg/L-N) | 6.6 | 1.1 |
| | Yield (mg VSS/mg N) | 6 | 14 |
| Ocean | Maximum VSS produced (mg/L) | 32 | 32 |
| | N consumed (mg/L-N) | 5.8 | 0.9 |
| | Yield (mg VSS/mg N) | 6 | 36 |
##### Table 3
Concentration and Composition of Nitrogen in CAS Effluent, Two BNR Effluents, and Receiving Waters for the March Bioassay
| | DTN (mg/L) | NO3-N (mg/L) | NO2-N (mg/L) | NH4-N (mg/L) | DON (mg/L) | DON/DTN (%) | Soluble Protein-N (mg/L) |
|---|---|---|---|---|---|---|---|
| CAS | 17.75 | 0.18 | 0.08 | 16.29 | 1.20 | 6.8 | 0.54 |
| BNR1 | 2.61 | 0.68 | 0.15 | 0.79 | 0.99 | 38.0 | 0.28 |
| BNR2 | 12.57 | 9.23 | 0.11 | 0.35 | 2.88 | 22.9 | 0.42 |
| CT River | 0.55 | 0.32 | BDL | BDL | 0.24 | 42.8 | 0.04 |
| LIS | 0.19 | - | - | - | - | - | - |

BDL: below detection limit; -: not measured.
##### Table 4
Results from the March Bioassay
| | | CAS whole effluent | BNR1 whole effluent | BNR2 whole effluent |
|---|---|---|---|---|
| Ocean | Maximum VSS produced (mg/L) | 80 | 75 | 141 |
| | N consumed (mg/L-N) | 3.3 | 0.7 | 5.6 |
| | Yield (mg VSS/mg N) | 25 | 101 | 25 |
##### Table 5
Concentration and Composition of Nitrogen in CAS Effluent, Two BNR Effluents, and Receiving Waters for the May Bioassay
| | DTN (mg/L) | NO3-N (mg/L) | NO2-N (mg/L) | NH4-N (mg/L) | DON (mg/L) | DON/DTN (%) | Soluble Protein-N (mg/L) |
|---|---|---|---|---|---|---|---|
| CAS | 16.40 | 0.25 | 0.377 | 13.85 | 1.93 | 11.8 | 0.76 |
| BNR1 | 3.22 | 0.09 | 0.06 | 2.43 | 0.64 | 19.8 | 0.28 |
| BNR2 | 3.60 | 1.40 | 0.17 | 0.65 | 1.37 | 38.1 | 0.31 |
| LIS | 0.43 | - | - | - | - | - | - |

BDL: below detection limit; -: not measured.
##### Table 6
Results from the May Bioassay
| | | CAS whole effluent | BNR1 whole effluent | BNR2 whole effluent |
|---|---|---|---|---|
| Ocean | Maximum VSS produced (mg/L) | 106 | 74 | 74 |
| | N consumed (mg/L-N) | 2.5 | 0.6 | 0.9 |
| | Yield (mg VSS/mg N) | 42 | 123 | 87 |
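The Yield rows in Tables 2, 4, and 6 appear to be the ratio of maximum VSS produced to nitrogen consumed, rounded after computation. A quick sketch checking that assumption against the reported values; the loose tolerance allows for the published yields presumably having been computed from unrounded measurements:

```python
# (max VSS produced, N consumed, reported yield) triples from Tables 2, 4, and 6.
rows = [
    (40, 6.6, 6), (16, 1.1, 14), (32, 5.8, 6), (32, 0.9, 36),   # Table 2
    (80, 3.3, 25), (75, 0.7, 101), (141, 5.6, 25),              # Table 4
    (106, 2.5, 42), (74, 0.6, 123), (74, 0.9, 87),              # Table 6
]

def close_enough(computed, reported, rel_tol=0.10):
    # 10% relative tolerance: the published yields were likely derived
    # from unrounded data, so exact agreement is not expected.
    return abs(computed - reported) <= rel_tol * reported

checks = [close_enough(vss / n, y) for vss, n, y in rows]
```

Every ratio falls within 10% of the reported yield, consistent with the Yield column being derived from the other two rows rather than measured independently.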
http://www.shyal.com/ternary-conditionals | # ternary conditionals
Ternaries in Python work just fine; however, nested ternaries can lead to code that looks convoluted.
## Standard form conditional
```python
i, j, r = 2, 3, 0
if i > 1:
    if j == 3:
        r = 4
    else:
        r = 5
else:
    r = 6

print(r)
```
## Ternary
```python
i, j = 2, 3
r = (4 if j == 3 else 5) if i > 1 else 6
print(r)
```
As illustrated above, ternaries have some syntactic issues in Python, imo:

• need disambiguating with parentheses (this looks very confusing: 4 if j == 3 else 5 if i > 1 else 6)
• are backwards compared to c-based languages: r = (i > 1 ? j == 3 ? 4 : 5 : 6)
• most of the space is taken up by the if and else statements, not the actual logic
## Tuples
Our shortest version of the code above is:
```python
i, j = 2, 3
r = (6, (5, 4)[j == 3])[i > 1]
print(r)
```
We are converting the bool result of the conditional to an int (0 or 1), which we then use for tuple indexing. Note that we're using tuples rather than lists, since their immutability presumably makes them more efficient.
• conditions are always in the square brackets
• values (or sub-conditionals) are always in the paren
• no extra paren required for readability
• extremely concise
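The indexing works because `bool` is a subclass of `int` in Python (`False == 0`, `True == 1`), so a comparison result selects position 0 or 1 directly. A minimal sketch checking the tuple form against the ternary form on every branch:

```python
# bool is an int subclass, so comparisons can index a tuple directly.
assert isinstance(True, int) and True == 1 and False == 0

# The two forms agree on every (i, j) branch combination.
for i in (0, 2):
    for j in (0, 3):
        ternary = (4 if j == 3 else 5) if i > 1 else 6
        tupled = (6, (5, 4)[j == 3])[i > 1]
        assert ternary == tupled

tupled = (6, (5, 4)[3 == 3])[2 > 1]  # the article's example: i, j = 2, 3
```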
## When not to use
It's worth noting that, unlike with regular ternaries, each expression appearing as an element of a list or tuple literal actually gets evaluated. This means you ought to use these only for simple ternary expressions. Using the results of functions is a no-no, since the function would actually get evaluated whether the result is used or not.
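To make the pitfall concrete, here is a small sketch (the function names are purely illustrative) showing that a ternary evaluates only the branch it takes, while tuple indexing builds both elements first:

```python
calls = []

def cheap():
    calls.append("cheap")
    return 1

def expensive():
    calls.append("expensive")
    return 2

# A ternary only evaluates the branch it takes.
r1 = cheap() if True else expensive()
assert calls == ["cheap"]

# Tuple indexing builds BOTH elements before the index is applied.
calls.clear()
r2 = (expensive(), cheap())[True]
assert sorted(calls) == ["cheap", "expensive"]
assert r1 == r2 == 1
```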
## Chained comparisons
On the subject of unconventional conditionals, it's worth explicitly noting that chaining comparisons is fine. Write a comparison so that when x is between 5 and 10 inclusive, the program prints 'in bounds' else it prints 'out of bounds'.
### Solution
```python
for x in range(15):
    print(f'{x}: in bounds' if 5 <= x <= 10 else f'{x}: out of bounds')
```
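A chained comparison like `5 <= x <= 10` behaves as `5 <= x and x <= 10`, with `x` evaluated only once. A quick sketch confirming the equivalence over the same range:

```python
def in_bounds(x):
    # Chained form; equivalent to (5 <= x) and (x <= 10).
    return f'{x}: in bounds' if 5 <= x <= 10 else f'{x}: out of bounds'

for x in range(15):
    assert (5 <= x <= 10) == (5 <= x and x <= 10)

assert in_bounds(5).endswith('in bounds')
assert in_bounds(10).endswith('in bounds')
assert in_bounds(11).endswith('out of bounds')
```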
https://support.bioconductor.org/p/13829/ | Question: exclude flagged spots in limma
0
13.2 years ago by
Yolande Tra160
Yolande Tra160 wrote:
An embedded and charset-unspecified text was scrubbed... Name: not available Url: https://stat.ethz.ch/pipermail/bioconductor/attachments/20060802/ef2691f0/attachment.pl
modified 13.2 years ago • written 13.2 years ago by Yolande Tra160
Answer: exclude flagged spots in limma
0
13.2 years ago by
Yolande Tra160
Yolande Tra160 wrote:
An embedded and charset-unspecified text was scrubbed... Name: not available Url: https://stat.ethz.ch/pipermail/bioconductor/attachments/20060803/e1c1710a/attachment.pl
Quoting Yolande Tra <yvtsma at rit.edu>:

> Hi Jose,
>
> Thank you for your reply. One problem is that value of 1 was assigned
> for bad spots and 0 for good ones. Is there a way to switch these
> values?
>
> Yolande

Of course you can switch them. If you already made a matrix from your flags (oldmatrix), and it contains ONLY the values 0 or 1, and you want to switch them over, just do this:

```r
newmatrix <- abs(oldmatrix - 1)
```

done. If you have other values then you may want to use 'if else' clauses within a loop ('for'), checking every element of the matrix, and then you can specify what values to substitute. If you have a problem with this just shout.

Jose

--
Dr. Jose I. de las Heras (J.delasHeras at ed.ac.uk)
The Wellcome Trust Centre for Cell Biology, Institute for Cell & Molecular Biology
Swann Building, Mayfield Road, University of Edinburgh, Edinburgh EH9 3JR, UK
Phone: +44 (0)131 6513374, Fax: +44 (0)131 6507360
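Jose's `abs(oldmatrix - 1)` flip is language-agnostic. For readers outside R, here is the same idea sketched in plain Python (illustrative only; the actual workflow in this thread is R/limma):

```python
# One flags column per slide: 1 = bad spot, 0 = good spot (Scanalyze convention).
flags = [
    [1, 0],
    [0, 0],
    [1, 1],
]

# Flip to the weight convention limma expects: 1 = full weight (good), 0 = bad.
weights = [[1 - f for f in row] for row in flags]
```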
Answer: exclude flagged spots in limma
0
13.2 years ago by
United Kingdom
J.delasHeras@ed.ac.uk1.9k wrote:
Quoting Yolande Tra <yvtsma at rit.edu>:

> Hi Gordon,
>
> I have read some of your answers regarding this issue but it does not
> answer mine. I have 5 separate image output files from Scanalyze (not
> one of the outputs listed in Limma). For each file, one of the columns,
> named FLAG, contains the flagged values: 1 if bad and 0 if good. Can
> you please help me: how can I exclude these spots from the analysis?
>
> The following commands helped me to read each file saved as text
> (tab delimited):
>
>     filenames <- c("gp1.dat","gp2.dat","gp3.dat","gp4.dat","gp5.dat")
>     RG <- read.maimages(filenames, annotation="My_spot_labels",
>         columns=list(Rf="CH1I", Gf="CH2I", Rb="CH1B", Gb="CH2B"))
>
> These are the notations in Scanalyze.
>
> I have used one function in the archive and got the following error:
>
>     mywtfun <- function(exclude.flags=c(1,2,3)) function(obj) 1-(obj$Flag %in% exclude.flags)
>     RG <- read.maimages(filenames, annotation="My_spot_labels",
>         columns=list(Rf="CH1I", Gf="CH2I", Rb="CH1B", Gb="CH2B"),
>         wt.fun=mywtfun(c(1)))
>     Error in "[<-"(*tmp*, , i, value = numeric(0)) :
>         nothing to replace with
>
> Thank you so much for your help.
> Yolande

Hi Yolande,

you can add "manually" a component $weights to your RG object. This is merely a matrix with values between 0 and 1, with as many columns as there are slides, and as many rows as there are genes. So each column is a "flags" column for each slide.

You can use the read.table function to read the whole data file (output from Scanalyze) for each slide, and then simply pick the column containing the flags, and use them to build the matrix. Then just assign RG$weights <- yourmatrix, and you're rolling...

You can create your own flags from any other parameters you wish, as long as you end up with a number between 0 and 1, where 1 is full weight (good) and 0 is bad, and different degrees in between.

I hope this helps.

Jose

written 13.2 years ago by J.delasHeras@ed.ac.uk1.9k

Answer: exclude flagged spots in limma

0

13.2 years ago by

Marcus Davy680

Marcus Davy680 wrote:

Here are a few of ways;

```r
> x
[1] 0 1 0 0 1
> (!x)*1
[1] 1 0 1 1 0
> as.numeric(!x)
[1] 1 0 1 1 0
> abs(x - 1)
[1] 1 0 1 1 0
```

The logical operator "!" coerces the binary vector to logical and does a logical negation (NOT). A weight of 0 is equivalent to an NA; a weight between 0 and 1 will be used to do a weighted linear model using lmFit (internally lm.series etc.) if the weights matrix is in your MAList and you specify to use it in the analysis.

Marcus
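Jose's recipe (read each slide's full output file, pull the FLAG column, flip it, and bind the columns into a genes-by-slides weights matrix) can be sketched outside R as well. A hypothetical plain-Python version with made-up file contents, just to show the shape of the pipeline; the real workflow uses R's read.table and RG$weights:

```python
import csv
import io

def flag_column(tsv_text, column="FLAG"):
    """Extract one slide's flag column (1 = bad spot, 0 = good spot)."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [int(row[column]) for row in reader]

def weights_matrix(slide_files):
    """Build a genes-by-slides weights matrix: 1 = good spot, 0 = flagged bad."""
    columns = [[1 - f for f in flag_column(text)] for text in slide_files]
    # Transpose so each row is a gene and each column a slide.
    return [list(gene) for gene in zip(*columns)]

# Two hypothetical two-gene slides.
slide1 = "GENE\tFLAG\ng1\t0\ng2\t1\n"
slide2 = "GENE\tFLAG\ng1\t1\ng2\t0\n"
W = weights_matrix([slide1, slide2])
```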
http://math.gatech.edu/seminars-and-colloquia-by-series?page=3 | ## Seminars and Colloquia by Series
Monday, April 22, 2019 - 13:55, Location: Skiles 005, University of South Carolina, Organizer: Wenjing Liao
The talk presents an extension for high dimensions of an idea from a recent result concerning near optimal adaptive finite element methods (AFEM). The usual adaptive strategy for finding conforming partitions in AFEM is ”mark → subdivide → complete”. In this strategy any element can be marked for subdivision but since the resulting partition often contains hanging nodes, additional elements have to be subdivided in the completion step to get a conforming partition. This process is very well understood for triangulations received via newest vertex bisection procedure. In particular, it is proven that the number of elements in the final partition is limited by constant times the number of marked cells. This motivated us [B., Fierro, Veeser, in preparation] to design a marking procedure that is limited only to cells of the partition whose subdivision will result in a conforming partition and therefore no completion step is necessary. We also proved that this procedure is near best in terms of both error of approximation and complexity. This result is formulated in terms of tree approximations and opens the possibility to design similar algorithms in high dimensions using sparse occupancy trees introduced in [B., Dahmen, Lamby, 2011]. The talk describes the framework of approximating high dimensional data using conforming sparse occupancy trees.
Monday, April 22, 2019 - 12:50, Location: Skiles 005, Joe Kileel, Princeton University, Organizer: Justin Chen
This talk will be about polynomial decompositions that are relevant in machine learning. I will start with the well-known low-rank symmetric tensor decomposition, and present a simple new algorithm with local convergence guarantees, which seems to handily outperform the state-of-the-art in experiments. Next I will consider a particular generalization of symmetric tensor decomposition, and apply this to estimate subspace arrangements from very many, very noisy samples (a regime in which current subspace clustering algorithms break down). Finally I will switch gears and discuss representability of polynomials by deep neural networks with polynomial activations. The various polynomial decompositions in this talk motivate questions in commutative algebra, computational algebraic geometry and optimization. The first part of this talk is joint with Emmanuel Abbe, Tamir Bendory, Joao Pereira and Amit Singer, while the latter part is joint with Matthew Trager.
Friday, April 19, 2019 - 16:00, Location: Skiles 005, Pavel Svetlichnyy, School of Physics, GaTech, Organizer: Federico Bonetto
I will talk about a conjecture that in Gibbs states of one-dimensional spin chains with short-ranged gapped Hamiltonians the quantum conditional mutual information (QCMI) between the parts of the chain decays exponentially with the length of separation between said parts. The smallness of QCMI enables efficient representation of these states as tensor networks, which allows their efficient construction and fast computation of global quantities, such as entropy. I will present the known partial results on the way of proving of the conjecture and discuss the probable approaches to the proof and the obstacles that are encountered.
Friday, April 19, 2019 - 14:00 , Location: Skiles 006 , Arash Yavari and Fabio Sozio, School of Civil and Environmental Engineering , Georgia Tech , Organizer: Igor Belegradek
We formulate a geometric nonlinear theory of the mechanics of accretion. In this theory the material manifold of an accreting body is represented by a time-dependent Riemannian manifold with a time-independent metric that at each point depends on the state of deformation at that point at its time of attachment to the body, and on the way the new material is added to the body. We study the incompatibilities induced by accretion through the analysis of the material metric and its curvature in relation to the foliated structure of the accreted body. Balance laws are discussed and the initial-boundary value problem of accretion is formulated. The particular cases where the growth surface is either fixed or traction-free are studied and some analytical results are provided. We numerically solve several accretion problems and calculate the residual stresses in nonlinear elastic bodies induced from accretion.
Friday, April 19, 2019 - 12:00, Location: Skiles 006, Marc Härkönen, Georgia Tech, Organizer:
Thursday, April 18, 2019 - 15:05 , Location: Skiles 006 , Nizar Demni , University of Marseille , Organizer: Christian Houdre
Thursday, April 18, 2019 - 15:00, Location: Skiles 005, David Galvin, University of Notre Dame, Organizer: Xingxing Yu
To any finite real sequence, we can associate a permutation $\pi$, via: $\pi(k)$ is the index of the $k$th smallest element of the sequence. This association was introduced in a 1987 paper of Alavi, Malde, Schwenk and Erd\H{o}s, where they used it to study the possible patterns of rises and falls that can occur in the matching sequence of a graph (the sequence whose $k$th term is the number of matchings of size $k$), and in the independent set sequence.

The main result of their paper was that *every* permutation can arise as the "independent set permutation" of some graph. They left open the following extremal question: for each $n$, what is the smallest order $m$ such that every permutation of $[n]$ can be realized as the independent set permutation of some graph of order at most $m$?

We answer this question. We also improve Alavi et al.'s upper bound on the number of permutations that can be realized as the matching permutation of some graph. There are still many open questions in this area.

This is joint work with T. Ball, K. Hyry and K. Weingartner, all at Notre Dame.
Wednesday, April 17, 2019 - 16:30 , Location: Skiles 006 , Michail Sarantis , Georgia Tech , Organizer: Xingxing Yu
The well known Erdos-Hajnal Conjecture states that every graph has the Erdos-Hajnal (EH) property. That is, for every $H$, there exists a $c=c(H)>0$ such that every graph $G$ with no induced copy of $H$ has the property $hom(G):=max\{\alpha(G),\omega(G)\}\geq |V(G)|^{c}$. Let $H,J$ be subdivisions of caterpillar graphs. Liebenau, Pilipczuk, Seymour and Spirkl proved that the EH property holds if we forbid both $H$ and $\overline{J}.$ We will discuss the proof of this result.
Wednesday, April 17, 2019 - 15:00, Location: Skiles 006, Georgia Tech, Organizer: Galyna Livshyts
We discuss the asymptotic value of the maximal perimeter of a convex set in an $n$-dimensional space with respect to certain classes of measures. Firstly, we derive a lower bound for this quantity for a large class of probability distributions; the lower bound depends on the moments only. This lower bound is sharp in the case of the Gaussian measure (as was shown by Nazarov in 2001), and, more generally, in the case of rotation invariant log-concave measures (as was shown by myself in 2014). We discuss another class of measures for which this bound is sharp. For isotropic log-concave measures, the value of the lower bound is at least $n^{1/8}$.

In addition, we show a uniform upper bound of $Cn\|f\|_{\infty}^{1/n}$ for all log-concave measures in a special position, which is attained for the uniform distribution on the cube. We further estimate the maximal perimeter of isotropic log-concave measures by $n^2$.
Wednesday, April 17, 2019 - 14:00 , Location: Skiles 006 , Sudipta Kolay , Georgia Tech , Organizer: Sudipta Kolay
We will see some instances of swindles in mathematics, primarily focusing on some in geometric topology due to Barry Mazur.
https://stackoverflow.com/questions/5165384/failed-to-load-viewstate-the-control-tree-into-which-viewstate-is-being-loaded | # Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate
I'm currently working on a dynamic core for several web projects. It has a core that uses a treeview and a menu, and then for each specific project it loads several different wucs into a maincontent. Some business projects use business-related wucs while others use different ones. So the span of wucs is really big.

Now to my problem: whenever a user presses a menu item or a tree item, it loads a wuc linked to that object into the maincontent.

But I'm having some viewstate errors and I've been looking around for 2 days now and none of the solutions explained are working for my project.

All my wucs have to have viewstate enabled.
Cycle is:

Page (Control A) does a postback with a variable to change the control to Control B in wucPanel (an UpdatePanel). OnLoad, the requested wuc is loaded.

Current code is
protected void Load_Page(object sender, EventArgs e)
{
//Code to decide which wuc to load.
}
I've tried several fixes, like adding different ids to the wuc, but this either disables the internal functions of the control (like handlers etc.) or generates the same viewstate error.

One solution I found was to load Control A and then just remove it and then load Control B. But this disabled the scripts for my 3rd-party control (Telerik).

I've also read about having different PlaceHolders for each type of control, but since I expect having up to 50 different controls I don't feel this is gonna help me.

And moving from Page_Load -> Page_Init generated the same error.
Error:
Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request.
In your case Anders, you still need to add the old control to your page in the init method along with the new control that you now want to add. Keep a reference to this old control that you have just added in a class level variable. So something like
Control _oldControl = null;
protected void Init_Page(object sender, EventArgs e)
{
//Code to decide which wuc to load.
_oldControl = wucc as Control;
//Now add the new control here.
}
//Override the LoadViewState method and remove the old control from the
//control collection once your page's viewstate has been loaded.
protected override void LoadViewState(object savedState)
{
    base.LoadViewState(savedState);
    ParentControl.ContentTemplateContainer.Controls.Remove(_oldControl);
}
Hope this helps. If it did, please check the checkbox next to this answer to accept it and vote it up if you like :)
• I can't get the system to go into my overridden method. Don't I just put this in my aspx.cs? Or is there some other step? – Anders Mar 3 '11 at 8:08
• Which override method are you talking about? The init page or the LoadViewState? – Nikhil Mar 3 '11 at 11:11
• the LoadViewState method. – Anders Mar 4 '11 at 9:22
• It will only be called on postbacks and not initial loads of the pages. Can you verify if it is being called on postbacks or not? – Nikhil Mar 4 '11 at 10:57
• Although it isn't exactly as I wanted it, I believe this is the way to go. Still getting Telerik errors though, which I'll have to work around somehow.. Thanks for the help! – Anders Mar 8 '11 at 8:42
In order to avoid ViewState-related errors, please make absolutely sure that in Page_Init you create the same control tree that was created the previous time ViewState was saved, i.e. on the previous postback. Simple page life cycle:
Page Init - create the control tree - View State is loaded and applied here
Page Load - already loaded view state, you can do modifications to the control tree here - Save View State
Page PreRender
• Problem is i don't want to display the same control when the new life cycle starts. Since the user has requested a new control to be placed in the container where the old one used to be(in the UpdatePanel). – Anders Mar 2 '11 at 11:41
• Would also like to point out that I implemented a solution that was pointed out on the following blog: geekswithblogs.net/FrostRed/archive/2007/02/19/… but it didn't solve my issue, I somehow still get this error! – Anders Mar 2 '11 at 11:44
For what it’s worth I recently had the same problem.
My scenario was as follows.
A fixed panel of filters (dropdown lists and textboxes) which built a search SQL string. On submission of the search consequent results were displayed in an editable gridview beneath.
On editing the gridview I could effectively change the state of a database record, thus removing it from the gridview under the filters previously chosen. In some cases this resulted in no results being returned, thus causing me to hide the gridview.
I then found that if I used the new state of the record in the filter and resubmitted the search, that error sometimes occurred.
The problem I eventually found had nothing to do with enabled viewstates etc but simply that the empty gridview, though no longer visible (changed programmatically), had not been rebound to a null datasource.
This appeared to cause the conflict and the error.
So it appears as though in my case the viewstate issue arose from a non-visible gridview that contained non-refreshed data. | 2019-10-16 18:01:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22803188860416412, "perplexity": 2719.4346096295694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669057.0/warc/CC-MAIN-20191016163146-20191016190646-00548.warc.gz"} |
https://math.stackexchange.com/questions/3491301/every-alpha-in-bbb-rn-determines-a-linear-functional-in-a-finite-dimension | # Every $\alpha \in \Bbb R^n$ determines a linear functional in a finite dimensional vector space
Every $$n$$-tuple of scalars $$(\alpha_1, \dots, \alpha_n)$$ determines a linear functional on a finite dimensional normed space $$X$$.
What is the proof of this? Why does there exist a linear functional $$f : X \rightarrow \Bbb R$$ such that $$f(e_i) = \alpha_i$$ for each $$n$$-tuple? What are the functionals? How can linearity be preserved with every choice of $$\alpha$$'s?
I can see if $$x = \sum_i c_i e_i$$ then $$f(x) = \sum_i c_i f(e_i) = \sum_i c_i \alpha_i$$ where $$\alpha_i = f(e_i)$$, but I don't understand how this can be reversible to start with an $$n$$-tuple and find a functional.
• For any $v\in X$ define $f(v)$ as the scalar product between the tuple and $v$, where $v$ is represented as $v_1e_1+...+v_ne_n$. Show that it is linear and then apply it to each of the $e_i$. – John Douma Dec 29 '19 at 19:09
• @JohnDouma What is a scalar product between an $n$-tuple in $\Bbb R^n$ and a vector $\sum_i v_i e_i$ in a vector space $X$? – Oliver G Dec 29 '19 at 19:19
• $\alpha_1v_1+...+\alpha_nv_n$ – John Douma Dec 29 '19 at 22:03
Determining the values of a linear map $$T:V\rightarrow W$$ on the basis elements of $$V$$ determines $$T$$, by the definition of a basis for a vector space. Your case is when $$W=\mathbb{R}$$.
• Can you elaborate? I don't understand how this explains why any $\alpha$ has a linear map such that $T(e_j) = \alpha_j$. – Oliver G Dec 29 '19 at 18:59 | 2021-08-01 11:45:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8313155174255371, "perplexity": 152.19866749303563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154175.76/warc/CC-MAIN-20210801092716-20210801122716-00717.warc.gz"} |
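To elaborate with the standard argument (this derivation is added for illustration, not part of the original thread): the functional is *defined* by the tuple, and linearity is a direct computation using uniqueness of coordinates.

```latex
% Fix a basis e_1, \dots, e_n of X and a tuple (\alpha_1, \dots, \alpha_n).
% Every x \in X has unique coordinates: x = \sum_{i=1}^{n} c_i e_i.  Define
f(x) \;=\; \sum_{i=1}^{n} c_i \alpha_i .
% Then f(e_j) = \alpha_j, because e_j has coordinates c_i = \delta_{ij}.
% Linearity: for x = \sum_i c_i e_i, \; y = \sum_i d_i e_i and a scalar \lambda,
f(\lambda x + y) \;=\; \sum_{i=1}^{n} (\lambda c_i + d_i)\,\alpha_i
  \;=\; \lambda \sum_{i=1}^{n} c_i \alpha_i + \sum_{i=1}^{n} d_i \alpha_i
  \;=\; \lambda f(x) + f(y).
```

The map is well-defined precisely because coordinates with respect to a basis are unique; that is the "reversible" direction the question asks about.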
https://social.technet.microsoft.com/Forums/en-US/ada09dc8-6cfb-48ad-a3f6-2b550a2b6b6b/srp-blocking-all-exe-in-a-folder-is-not-applied-to-sublfolders?forum=winserverGP | # SRP blocking all exe in a folder is not applied to sublfolders
### Question
• Hello,
Following the Technet documentation saying about path rules :
"A path rule can specify a folder or fully qualified path to a program. When a path rule specifies a folder, it matches any program contained in that folder and any programs contained in subfolders. Software restriction policies support local and Uniform Naming Convention (UNC) paths." (https://technet.microsoft.com/en-us/library/cc786941(v=ws.10).aspx) or
"A path rule can specify a folder or fully qualified path to a program. When a path rule specifies a folder, it matches any program contained in that folder and any programs contained in subfolders. Both local and UNC paths are supported." (https://technet.microsoft.com/en-us/library/bb457006.aspx),
we have created a path rule disallowing c:\aa\*.exe
The problem is when applying this policy, an exe file located in c:\aa\bb\ can be executed.
To disallow that, we have to add another path rule specifying c:\aa\bb\*.exe
But again, in subfolders you are allowed to execute any exe file.
Are we doing something wrong, or is the documentation on TechNet wrong?
Marc
Monday, January 11, 2016 1:54 PM
• > we have create a path rule disallowing c:\aa\*.exe
This is not a folder, but a file. Omit the "\*.exe" part.
Monday, January 11, 2016 4:16 PM
### All replies
• Oh, ok.
• So if I only wanted to block exe files but, let's say, allow the mdb extension in a folder and its subfolders, I would have to configure a path rule (only the folder path, no extension pattern) and then configure the Designated File Types (removing the mdb extension from the list).
Thank you very much, I hadn't understood it like this at first :-)
Marc
Tuesday, January 12, 2016 7:46 AM | 2018-09-20 03:05:33 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8505403399467468, "perplexity": 6288.907476297844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156376.8/warc/CC-MAIN-20180920020606-20180920040606-00535.warc.gz"} |
https://stackoverflow.com/questions/7927230/remove-directory-from-remote-repository-after-adding-them-to-gitignore | # Remove directory from remote repository after adding them to .gitignore
I committed and pushed some directory to github. After that, I altered the .gitignore file adding a directory that should be ignored. Everything works fine, but the (now ignored) directory stays on github.
How do I delete that directory from github and the repository history?
The rules in your .gitignore file only apply to untracked files. Since the files under that directory were already committed in your repository, you have to unstage them, create a commit, and push that to GitHub:
git rm -r --cached some-directory
git commit -m 'Remove the now ignored directory "some-directory"'
git push origin master
You can't delete the file from your history without rewriting the history of your repository - you shouldn't do this if anyone else is working with your repository, or you're using it from multiple computers. If you still want to do that, you can use git filter-branch to rewrite the history - there is a helpful guide to that here.
Additionally, note the output from git rm -r --cached some-directory will be something like:
rm 'some-directory/product/cache/1/small_image/130x130/small_image.jpg'
rm 'some-directory/product/cache/1/small_image/135x/small_image.jpg'
rm 'some-directory/.htaccess'
rm 'some-directory/logo.jpg'
The rm is feedback from git about the repository; the files are still in the working directory.
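To see the index-vs-working-directory distinction end to end, here is a sketch you can run in a throwaway repository (assumes git is installed; the directory and file names are made up for illustration):

```shell
# Rehearse the commands from the answer in a disposable repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

mkdir -p some-directory
echo data > some-directory/logo.jpg
git add . && git commit -qm 'initial commit'

# Start ignoring the directory, then untrack it (index only, not disk):
echo 'some-directory/' > .gitignore
git rm -r -q --cached some-directory
git add .gitignore && git commit -qm 'Remove the now ignored directory'

# The directory is no longer tracked...
git ls-files | grep some-directory || echo 'not tracked'
# ...but its files still exist in the working directory:
ls some-directory/logo.jpg
```

The last two commands show the point above: the index no longer lists the directory, while the files remain on disk.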
• If someone else pulls, will the now ignored files be deleted for them or stay untouched? – Martin Konicek Sep 17 '12 at 17:19
• @Martin Konicek: if the user that's pulling those changes has no modifications to those files, then they will be removed. – Mark Longair Sep 29 '12 at 9:40
• @MarkLongair What do -r and --cached do? Thanks. – Labanino Aug 28 '14 at 21:43
• @Labanino: -r means recursive, necessary if you're doing whole directories. --cached overrides git's normal behaviour of deleting them from the working directory and staging the deletion for committing, and makes git only operate on the staging area ready for committing. It's how you tell git you want to keep your local copies of the files. – entheh Apr 8 '15 at 18:16
• This works nice, but I had to first do: git reset name_of_file for what's described above to work – Edvard Haugland Sep 24 '17 at 10:53
I do this:
git rm --cached `git ls-files -i --exclude-from=.gitignore`
git commit -m 'Removed all files that are in the .gitignore'
git push origin master
This will remove all the files/folders that are in your .gitignore, saving you from having to pick each one manually.
This seems to have stopped working for me, I now do:
git rm -r --cached .
git commit -m 'Removed all files that are in the .gitignore'
git push origin master
• Thanks, this helped me a lot! If you're using Windows PowerShell, you can do foreach ($i in iex 'git ls-files -i --exclude-from=.gitignore') { git rm --cached $i } – Matthew Mar 31 '13 at 3:08
• Tried your second approach, it removed all the files from my local git! – artnikpro Jan 24 '15 at 23:24
• I think you're supposed to navigate to the subdirectory and do it. Correct me if I'm wrong. – user4414636 May 27 '15 at 13:46
As per my Answer here: How to remove a directory from git repository?
### To remove folder/directory only from git repository and not from the local try 3 simple steps.
Steps to remove directory
git rm -r --cached FolderName
git commit -m "Removed folder from repository"
git push origin master
Steps to ignore that folder in next commits
To ignore that folder from next commits make one file in root named .gitignore and put that folders name into it. You can put as many as you want
.gitignore file will be look like this
/FolderName
• This is exactly what I was looking for. Thanks :) – Rohit Singh Sep 30 '19 at 2:41
• I am glad it helped @Rohit Singh – eirenaios Sep 30 '19 at 8:05
Blundell's first answer didn't work for me. However it showed me the right way. I have done the same thing like this:
> for i in `git ls-files -i --exclude-from=.gitignore`; do git rm --cached $i; done
> git commit -m 'Removed all files that are in the .gitignore'
> git push origin master

I advise you to check the files to be deleted first by running the statement below:

git ls-files -i --exclude-from=.gitignore

I was using a default .gitignore file for Visual Studio and I noticed that it was removing all log and bin folders in the project, which was not my intended action.

Note: This solution works only with the GitHub Desktop GUI. Using the GitHub Desktop GUI it is very simple:

1. Move the folder to another location (out of the project folder) temporarily.
2. Edit your .gitignore file and remove the folder entry, which would remove the folder from the master repository on the GitHub page.
3. Commit and Sync the project folder.
4. Move the folder back into the project folder.
5. Re-edit the .gitignore file.

That's all.

The answer from Blundell should work, but for some bizarre reason it didn't for me. I had to first pipe the filenames output by the first command into a file and then loop through that file, deleting each file one by one:

git ls-files -i --exclude-from=.gitignore > to_remove.txt
while read line; do git rm -r --cached "$line"; done < to_remove.txt
rm to_remove.txt
git commit -m 'Removed all files that are in the .gitignore'
git push origin master
This method applies the standard .gitignore behavior, and does not require manually specifying the files that need to be ignored.
Can't use --exclude-from=.gitignore anymore :/ - Here's the updated method:
General advice: start with a clean repo - everything committed, nothing pending in working directory or index, and make a backup!
#commit up-to-date .gitignore (if not already existing)
#this command must be run on each branch
git add .gitignore
git commit -m "Create .gitignore"
#apply standard git ignore behavior only to current index, not working directory (--cached)
#if this command returns nothing, ensure /.git/info/exclude AND/OR .gitignore exist
#this command must be run on each branch
git ls-files -z --ignored --exclude-standard | xargs -0 git rm --cached
#optionally add anything to the index that was previously ignored but now shouldn't be:
#commit again
#optionally use the --amend flag to merge this commit with the previous one instead of creating 2 commits.
git commit -m "re-applied modified .gitignore"
#other devs who pull after this commit is pushed will see the newly-.gitignored files DELETED
If you also need to purge the newly-ignored files from the branch's commit history or if you don't want the newly-ignored files to be deleted from future pulls, see this answer.
If you're working from PowerShell, try the following as a single command.
PS MyRepo> git filter-branch --force --index-filter
>> "git rm --cached --ignore-unmatch -r .\\\path\\\to\\\directory"
>> --prune-empty --tag-name-filter cat -- --all
Then, git push --force --all.
Documentation: https://git-scm.com/docs/git-filter-branch
• How is using git filter-branch different from using git rm? I was originally thinking I have to use a version of git filter-branch, but most comments here recommend using git rm. From what I understand, git rm only removes it from the current working directory and index. But if it was added multiple commits before, it will still be in the repository. – alpha_989 Jan 21 '18 at 18:32
• Right. git rm will remove the file from the latest commit into the future. git filter-branch lets us remove it from the entire history. See also: help.github.com/articles/… – Shaun Luttin Jan 22 '18 at 0:52 | 2020-02-17 09:55:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17241138219833374, "perplexity": 6663.559107738522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141806.26/warc/CC-MAIN-20200217085334-20200217115334-00537.warc.gz"} |
https://bugzilla.mozilla.org/show_bug.cgi?id=881938 | Closed Opened 7 years ago Closed 6 years ago
# Dragging monocle up or down out of an edit box causes text selection to end
x86_64
Windows 8.1
RESOLVED FIXED
Firefox 30
## Attachments
### (2 files, 2 obsolete files)
- testcase.html: 819 bytes, text/html (Details)
- patch.diff: 10.77 KB, patch; jimm: review+, roc: superreview- (Details | Diff | Splinter Review)
STR
1) Go to the test case in bug 854070
2) Tap in the edit box so that both selection monocles are in the same place
3) Drag the selection monocles up or down, out of the edit box
You'll notice that text selection has ended; the monocles have disappeared and so has the selection highlighting
Blocks: caret-sel
Summary: Dragging monocle up or down out of an edit box causes text selection to end → Defect - Dragging monocle up or down out of an edit box causes text selection to end
Whiteboard: feature=defect u=metro_firefox_user c=content_features p=0
Priority: -- → P3
Whiteboard: feature=defect u=metro_firefox_user c=content_features p=0 → [selection] feature=defect u=metro_firefox_user c=content_features p=0
Blocks: metrobacklog
No longer blocks: metrov2defect&change
Summary: Defect - Dragging monocle up or down out of an edit box causes text selection to end → Dragging monocle up or down out of an edit box causes text selection to end
Whiteboard: [selection] feature=defect u=metro_firefox_user c=content_features p=0 → [selection] [defect] p=0
Blocks: 957244
No longer blocks: caret-sel
OS: Windows 8 Metro → Windows 8.1
Whiteboard: [selection] [defect] p=0 → [selection] [defect] p=3
Priority: P3 → --
Priority: -- → P1
Target Milestone: --- → Firefox 30
Whiteboard: [selection] [defect] p=3 → [selection] p=3 r=ff30
Target Milestone: Firefox 30 → ---
Assignee: nobody → azasypkin
Status: NEW → ASSIGNED
Priority: P1 → P2
QA Contact: kamiljoz
Whiteboard: [selection] p=3 r=ff30 → [selection] p=3 s=it-30c-29a-28b.2 r=ff30
Hey Tim, are you still able to reproduce the issue?
Flags: needinfo?(tabraldes)
Yes I'm able to repro. I have to drag my finger a good distance from the edit box but doing that does indeed cause text selection to end.
I'm not sure if we have a bug on file for this separate issue, but dragging the left selection monocle past the left border of the edit box causes the text selection to become weird: at that point, any rightward movement starts selecting text as if the movement started at the left edge of the edit box.
Flags: needinfo?(tabraldes)
Status update, the problem consists of two parts:
1. nsDOMWindowUtils.selectAtPoint can't select anything when the point is located at the selectable frame's (input element in our case) boundary positions (start or end). This happens when the caret monocle is transitioning to a selection monocle;
2. When the user pulls the caret monocle up\down\right\left out of the input, the selection spills over to the content nearby, or just fails if there is nothing selectable around.
Attached patch wip.diff (obsolete) — Splinter Review
More details on the first part: PeekBackwardAndForward tries to select content around the target offset and fails if it can't get any selectable content from either side. I tried to address this issue by probing the target frame for being a start\end one. In case there is nothing to select in a specific direction, the method selects content at the current offset. Though it's not clear to me whether PeekBackwardAndForward currently acts as expected. It would be great if we could discuss it via IRC.
To partially address the second part, when switching from caret to selection mode we can use the caret coordinates rather than the current monocle ones. Selection still spills over, but that may be fixed in a separate bug. It broke one contenteditable test, but I would like to discuss it.
Attachment #8382229 - Flags: feedback?(jmathies)
You should push the frame changes to try. This code is also used on desktop for text selection via the mouse and keyboard, so we have to be careful about breaking behavior there.
Comment on attachment 8382229 [details] [diff] [review]
wip.diff
Review of attachment 8382229 [details] [diff] [review]:
-----------------------------------------------------------------
Can you give me some basic steps to reproduce how this changes current behavior so I can play with it?
::: browser/metro/base/content/contenthandlers/SelectionHandler.js
@@ +123,5 @@
> + if (this._targetIsEditable) {
> + let selection = this._getSelection().getRangeAt(0).
> + getBoundingClientRect();
> + aX = selection.left;
> + aY = selection.top;
so what is this doing here? looks like we change the coordinates of the tap to the upper left corner of the current selection? I don't understand why we do this.
Attachment #8382229 - Flags: feedback?(jmathies) → feedback+
Attached file testcase.html (obsolete) —
Test case to demonstrate current Firefox double-click-to-select behavior. Text is selected when the user double-clicks around the text, and it doesn't matter how far from the actual text the user clicks.
Attached file testcase.html
Updated, to be viewable from bugzilla.
Attachment #8383396 - Attachment is obsolete: true
Attached patch patch.diffSplinter Review
1. Updated endFrame detection. Now it's detected in the same way as in nsSelection: http://mxr.mozilla.org/mozilla-central/source/layout/generic/nsSelection.cpp#885;
I really hate updating existing tests, but here it looks reasonable. As you may see, metro/contenteditable (discussed on IRC) and dom/selectAtPoint (for the no-selection case) were updated. Regarding the latter, please see the attached testcase.html. A similar test case is used for the selectAtPoint mochitests. There, selectAtPoint is called at the "(text.left - 20, text.top - 20)" position and it's expected that no selection will be made. That assumption was based on the old behavior of PeekBackwardAndForward - it fails if it can't find a char from either the backward or the forward position. Now that's changed and a selection is actually made. To advocate the new behavior I'm comparing it with the double-click-to-select behavior of desktop Firefox - it selects text when the user double-clicks at the mentioned position (it uses PeekBackwardAndForward under the hood).
Patch pushed to try: https://tbpl.mozilla.org/?tree=Try&rev=1aa38dcf1575
Attachment #8382229 - Attachment is obsolete: true
Attachment #8383412 - Flags: review?(jmathies)
Hey Jim, any chance you can get the review in today?
Flags: needinfo?(jmathies)
(In reply to Marco Mucci [:MarcoM] from comment #10)
> Hey Jim, any chance you can get the review in today?
I will look at this today but fyi, it will need an sr from someone in layout before it lands. I'll set that once I'm finished testing.
Flags: needinfo?(jmathies)
Comment on attachment 8383412 [details] [diff] [review]
patch.diff
Review of attachment 8383412 [details] [diff] [review]:
-----------------------------------------------------------------
AFAICT this doesn't appear to impact desktop selection at all. I tested by clicking on selection at the beginning and end of frames in inputs with and without this patch applied, and I didn't see any difference. Which makes me wonder why this helps us in Metro touch selection. With desktop, the two calls at the end of PeekBackwardAndForward in HandleClick always trigger selection of the last character (if it's a period, for example) or a word.
So I'm curious why this change is needed for metrofx. I'll try applying in metro land and debug the code a bit to see.
::: layout/generic/nsFrame.cpp
@@ +2932,5 @@
> + baseFrame->GetOffsets(frameStart, frameEnd);
> +
> + bool isFrameStart = frameStart == baseOffset;
> + bool isFrameEnd = frameEnd == baseOffset &&
> + !(frameStart == 0 && frameEnd == 0);
nit - !(!frameStart && !frameEnd)
Comment on attachment 8383412 [details] [diff] [review]
patch.diff
Review of attachment 8383412 [details] [diff] [review]:
-----------------------------------------------------------------
Confirmed this fixes the original bug, and AFAICT it doesn't impact desktop selection. So I think we can take this change. Seeking sr from a layout peer since this touches nsFrame code.
Attachment #8383412 - Flags: superreview?(roc)
Attachment #8383412 - Flags: review?(jmathies)
Attachment #8383412 - Flags: review+
Whiteboard: [selection] p=3 s=it-30c-29a-28b.2 r=ff30 → [selection] p=3 s=it-30c-29a-28b.3 r=ff30
Comment on attachment 8383412 [details] [diff] [review]
patch.diff
Review of attachment 8383412 [details] [diff] [review]:
-----------------------------------------------------------------
::: layout/generic/nsFrame.cpp
@@ +2932,5 @@
> + baseFrame->GetOffsets(frameStart, frameEnd);
> +
> + bool isFrameStart = frameStart == baseOffset;
> + bool isFrameEnd = frameEnd == baseOffset &&
> + !(frameStart == 0 && frameEnd == 0);
Why not just isFrameEnd = frameEnd == baseOffset && !isFrameStart?
@@ +2949,5 @@
> + // capture first available character into the selection.
> + if (isFrameStart) {
> + startpos.mStartOffset = baseOffset + 1;
> + startpos.mAmount = eSelectCharacter;
> + }
I do not understand why this change makes sense. Same for the other change below. PeekOffset is able to move between frames.
Attachment #8383412 - Flags: superreview?(roc) → superreview-
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #14)
> Comment on attachment 8383412 [details] [diff] [review]
> patch.diff
>
> Review of attachment 8383412 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: layout/generic/nsFrame.cpp
> @@ +2932,5 @@
> > + baseFrame->GetOffsets(frameStart, frameEnd);
> > +
> > + bool isFrameStart = frameStart == baseOffset;
> > + bool isFrameEnd = frameEnd == baseOffset &&
> > + !(frameStart == 0 && frameEnd == 0);
>
> Why not just isFrameEnd = frameEnd == baseOffset && !isFrameStart?
>
> @@ +2949,5 @@
> > + // capture first available character into the selection.
> > + if (isFrameStart) {
> > + startpos.mStartOffset = baseOffset + 1;
> > + startpos.mAmount = eSelectCharacter;
> > + }
>
> I do not understand why this change makes sense. Same for the other change
> below. PeekOffset is able to move between frames.
Well, basically the idea was to prevent PeekBackwardAndForward from failing when it can't peek content from one of the directions, e.g. the current bug's case - if we call it with the point corresponding to the start or end of a text input. Now I see that checking for the end\start frame covers only a special case (and probably may be wrong; unfortunately I can't find any info on how frames are organized for multiline inputs or contenteditable, for example). It seems that a more generic approach would be to check something like isPeekBackward\ForwardSucceeded and make the decision about the content to select based only on the succeeded peek.
Does it make sense?
Btw, just an observation not related to the patch: the name PeekBackwardAndForward looks confusing to me - the name says that it peeks content backward and forward like PeekOffset does, but in both directions, while in reality it behaves differently - it doesn't let the calling code know what it actually peeked, plus it selects the peeked content. So it acts like "SelectBackwardAndForward" :)
Depends on: 858206
Seems that the issue is resolved by the patch for bug 858206.
Status: ASSIGNED → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 30
Whiteboard: [selection] p=3 s=it-30c-29a-28b.3 r=ff30 → [selection] p=3 s=it-30c-29a-28b.3 r=ff30 [qa-]
https://byjus.com/question-answer/oxygen-molecule-is-paramagnetic-because-no-of-bonding-electrons-no-of-antibonding-electronspresence-of-unpaired/ | Question
# Oxygen molecule is paramagnetic because:
A
no. of bonding electrons > no. of antibonding electrons
B
no. of bonding electrons < no. of antibonding electrons
C
no. of bonding electrons = no. of antibonding electrons
D
presence of unpaired electrons in molecular orbitals.
Solution
## The correct option is D: presence of unpaired electrons in molecular orbitals.

Electronic configuration of $$O_2$$ (16 electrons):
$$(\sigma 1 s^2) (\sigma^* 1 s^2) (\sigma 2 s^2) (\sigma^* 2 s^2) (\sigma 2 p_z^2) (\pi 2 p_x^2 = \pi 2 p_y^2) (\pi^* 2 p_x^1 = \pi^* 2 p_y^1)$$

The $$\pi^* 2 p_x$$ and $$\pi^* 2 p_y$$ orbitals each hold one unpaired electron, which makes the molecule paramagnetic. So the correct option is $$[D]$$.
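As a quick cross-check of the counting above: with 10 electrons in bonding orbitals and 6 in antibonding orbitals,

```latex
\text{Bond order} = \tfrac{1}{2}\,(N_b - N_a) = \tfrac{1}{2}\,(10 - 6) = 2 ,
```

and by Hund's rule the two $$\pi^*$$ electrons occupy $$\pi^* 2p_x$$ and $$\pi^* 2p_y$$ singly rather than pairing up, which is exactly why $$O_2$$ is paramagnetic despite its double bond.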
https://www.physicsforums.com/threads/why-study-susy.810749/ | # Why study SUSY?
1. Apr 26, 2015
### itssilva
Supersymmetry, by itself, is a neat, elegant concept; I've read somewhere (I think in Griffiths' Introduction to Elementary Particles, if memory serves me right) that it allows the various running couplings of the Standard Model to converge to a single value at high enough energies; however, besides that, what are the theoretical/experimental motivations to study this complication of standard QT? I'm under the impression that this theory hasn't been garnering much love from the non-string physics community of late.
2. Apr 26, 2015
### Orodruin
Staff Emeritus
Supersymmetry is popular in large parts of the particle physics community, not only with string theorists. The main reason for this is that it would provide possible solutions to some problems with the Standard Model. For example, it would solve the hierarchy problem (assuming the SUSY scale is low enough) and it provides viable dark matter candidates.
3. Apr 27, 2015
### Ilja
One point of supersymmetry seems to be that the additional particles it introduces somehow "solve" the "problem" of the unification of couplings. Whether this is really a problem is a first question - it is, of course, a problem for GUTs, which unify all forces into some simple gauge group; this requires that at some fundamental scale of unbroken symmetry the interaction coupling has to be the same for all parts.
One can argue that it could be also a problem for an effective theory, which assumes a common critical length scale where the field theory breaks down into some sub-theory, say some "atomic ether" or so. In this case, one would reasonably expect that all the effects should have a similar order at this scale.
My question is a different one. Suppose one simply has some usual field theory containing all that the SM contains, but in different numbers - say, more fermions, more scalar fields, all with masses. Computing the corresponding running of the couplings should be, I would guess, something already done in quite general form, so that it could be sufficient to put in the numbers of the additional particles, their spins, their masses (I would guess also the representations of the SM gauge groups acting on them) to find out whether in this theory the couplings unify or not.
Any suggestions where such things can be found?
4. Apr 27, 2015
### Orodruin
Staff Emeritus
You cannot simply add an arbitrary number of fermions to the SM without caring for anomaly cancellations. In the SM, the gauge couplings are such that the anomalies cancel generation by generation.
What you can do is to add extra Higgses. This does change the running behaviour of the gauge couplings and using an appropriate number of extra Higgs doublets (if I remember correctly, the required number was seven additional doublets, but do not quote me on that) you can get the couplings to unify.
5. Apr 27, 2015
### Ilja
Thanks, I know.
But I don't care much about this anomaly stuff, for a reason which is possibly completely wrong:
My understanding is that the problem they cause is non-renormalizability. But in this case, this would be a non-problem for an effective field theory with some explicit cutoff. There, the non-renormalizable terms would be those which decrease faster than the renormalizable ones in the large distance limit, so they would disappear automatically if the cutoff is sufficiently small, even if they were of comparable order at the critical distance. Roughly speaking, one could leave problems with non-renormalizability to the long distance limit. Or is this completely off, and there are other problems with them?
What I would like to add (for completely different reasons) is a single scalar field for each electroweak doublet. The triplets for quarks would be colored, those for leptons would not interact with gauge fields at all.
Now, naive counting gives equal numbers for bosons and fermions: 8+3+1 gauge bosons, 3 x (1+3) new scalars against the 3 x (1+3) electroweak doublets. So, it seems to make sense to look if some of the advantages of supersymmetry - whatever they are - would be present - by accident - in this theory too.
6. Apr 27, 2015
### Orodruin
Staff Emeritus
This is inconsistent. You cannot have an electroweak doublet that does not interact with gauge fields.
7. Apr 28, 2015
### Ilja
But I can have scalar fields which do not interact with gauge fields, and they can be associated with electroweak doublets.
This association would be a correspondence for numbers (there will be one scalar field for every electroweak doublet). Moreover, each scalar field borrows its gauge charges from one preferred component of "its" electroweak doublet. In the case of leptons, this preferred component will be the right-handed neutrino, for quarks the right-handed anti-down-type quark (EM charge 1/3).
8. Apr 28, 2015
### Orodruin
Staff Emeritus
Why dont you write down your intended Lagrangian? It will be more effective (haha) than trying to describe your idea in words.
9. Apr 28, 2015
### Ilja
I don't think this would simplify something. The Lagrangian of scalar fields interacting with gauge fields would be nothing, new, thus, only a misdirection of interest (one would think there is something interesting in this formula) and what matters would be anyway the description of the number of fields and their charges.
10. Apr 28, 2015
### Orodruin
Staff Emeritus
Exactly, and I am not getting a clear picture of exactly what you want to do, which fields you want to couple to the scalars and how. This would be much simpler if you just wrote down the (interaction) Lagrangian you had in mind.
11. Apr 28, 2015
### Ilja
If $$\mathcal{L}= \frac12 D_\mu \phi^*_{ga} D^\mu \phi_{ga} - \frac12 m^2_{ga} \phi^*_{ga}\phi_{ga}$$ with $$D_\mu = \partial_\mu + i A^b_\mu T^b$$ makes you happy. Here g is the generation index, a runs from 0 (for leptons) to 3 (for the three quark colors), b runs over the SM gauge fields, and the $T^b$ describe the representation I have to describe in words anyway: it acts trivially on the $\phi_{g0}$ and by the standard three-dimensional representation of U(3) (obtained by factorizing out weak interactions $SU(2)_L$ and $\mathbb{Z}_3$ from the SM gauge group) on the three-dimensional space of the $\phi_{gi}$ with $i=1,2,3$, g fixed.
Last edited: Apr 28, 2015
12. Apr 28, 2015
### ChrisVer
That's true in my case, since both the Dark Matter scenarios (supersymmetric WIMPs) and the hierarchy-problem solutions of SUSY become less favorable as you go to larger energies. As already mentioned, the hierarchy problem could be solved by SUSY if its scale were around the TeV scale. Obviously we haven't yet found anything at the LHC, and the parameter space for the theory to exist gets tighter. On the other hand, people always try to save SUSY models like the MSSM, either by extending them or, I don't know exactly, by arguing that they might still be invisible at the LHC.
I still like SUSY, but the usual things the theory could solve have been cornered a lot.
The unification of the coupling constants is (for me) an aesthetic need of some people and I don't really like it (as a reason)
Check eg. this figure :
http://scienceblogs.com/startswithabang/files/2013/05/running_coupling.gif
The left is the SM and the right is the MSSM.
I don't personally find any reason to have the lines changing at only 1 given energy (at around 10^3 GeV ~ 1 TeV scale).
If you don't care about these problems and don't care about experiments (like string theorists) then you can allow for SUSY to exist at any energy.
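To make the figure's point reproducible, here is a rough one-loop sketch of the running; the inputs at $M_Z$ and the beta coefficients are approximate textbook values (GUT-normalized $\alpha_1$), thresholds and two-loop terms are ignored, and the MSSM coefficients are crudely applied all the way down to $M_Z$:

```python
import math

M_Z = 91.1876  # GeV
ALPHA_INV_MZ = {1: 59.0, 2: 29.6, 3: 8.5}   # approximate alpha_i^{-1}(M_Z)
B_SM   = {1: 41 / 10, 2: -19 / 6, 3: -7}    # one-loop beta coefficients
B_MSSM = {1: 33 / 5,  2: 1,       3: -3}

def run(b, mu):
    """One-loop RGE: alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2*pi) * ln(mu/M_Z)."""
    t = math.log(mu / M_Z)
    return {i: a - b[i] / (2 * math.pi) * t for i, a in ALPHA_INV_MZ.items()}

def spread(alpha_inv):
    """Gap between the largest and smallest inverse coupling."""
    return max(alpha_inv.values()) - min(alpha_inv.values())

MU = 2e16  # GeV, near the putative GUT scale
sm, mssm = run(B_SM, MU), run(B_MSSM, MU)
print("SM   spread of alpha^-1 at 2e16 GeV:", round(spread(sm), 2))
print("MSSM spread of alpha^-1 at 2e16 GeV:", round(spread(mssm), 2))
```

With these numbers the three SM lines miss each other by several units of $\alpha^{-1}$, while the MSSM lines land within a fraction of a unit of one another, which is the near-unification shown in the right panel of the linked figure.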
13. Apr 28, 2015
### Orodruin
Staff Emeritus
So what are your Yukawa couplings?
If your fields transform non-trivially under any of the gauge groups, they are going to have gauge couplings so what is the point of having the covariant derivative there at all?
14. Apr 28, 2015
### Ilja
The coupling constants are the same as those of the quarks.
15. Apr 28, 2015
### Orodruin
Staff Emeritus
Which coupling constants? The Yukawa couplings are coupling constants between fermions and scalars. If you have gauge coupling constants your scalar fields interact with the gauge fields. Or are you saying that you want to have a scalar field that does not have any quantum number? What is going to stop this field from coupling to every fermion in the SM?
16. Apr 28, 2015
### Ilja
Yes, these scalar fields are not intended as Higgs fields which have to give any fermions any mass.
17. Apr 28, 2015
### Orodruin
Staff Emeritus
It is not a matter of having to give fermions mass, it is a matter of which terms are allowed in your Lagrangian. If they are allowed, they will be there or you need to explain why they are not. If you have a singlet scalar $\phi$, what stops me from writing down the interaction term $\phi \bar q_R q_R$?
18. Apr 28, 2015
### Ilja
So ok, you are free to add whatever you want or think it is somehow unpreventable. I have seen no justification or necessity to write down such terms, so I do not write them down by Ockham's razor, that's all.
edit: Thinking about this a little bit more, such a Yukawa term, which would connect the scalar field with the corresponding electroweak doublet, would be quite natural.
Last edited: Apr 28, 2015
19. Apr 28, 2015
### ChrisVer
Whatever you are allowed to... If you are allowed to write something, then you have to write it; otherwise explain how this term went missing.
If you could discard terms like this in the Lagrangian, then there would be no strong-CP-problem and there would be no "bad" proton decays in several other theories.
20. Apr 28, 2015
### Orodruin
Staff Emeritus
Ockham's razor would actually go in the other direction. If you do not prohibit the coupling using some symmetry, it is generally going to be generated by running at one scale even if it is zero at another scale. | 2017-10-20 07:35:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7261670231819153, "perplexity": 796.0131047464273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823839.40/warc/CC-MAIN-20171020063725-20171020083631-00029.warc.gz"} |
https://zbmath.org/?q=an:1189.60170 | ## Limit theorems for vertex-reinforced jump processes on regular trees.(English)Zbl 1189.60170
Summary: Consider a vertex-reinforced jump process defined on a regular tree, where each vertex has exactly b children, with b $$\geq 3$$. We prove the strong law of large numbers and the central limit theorem for the distance of the process from the root. Notice that it is still unknown if vertex-reinforced jump process is transient on the binary tree.
### MSC:
60K35 Interacting random processes; statistical mechanics type models; percolation theory
60F15 Strong limit theorems
60F05 Central limit and other weak theorems
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2007_v44n2_359 | INTUITIONISTIC FUZZY SETS IN GAMMA-SEMIGROUPS
Title & Authors
Uckun, Mustafa; Ozturk, Mehmet Ali; Jun, Young-Bae;
Abstract
We consider the intuitionistic fuzzification of the concept of several $\Gamma$-ideals in a $\Gamma$-semigroup S, and investigate some properties of such $\Gamma$-ideals.
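For readers new to the terminology, following Atanassov (1986): an intuitionistic fuzzy set $A$ in a set $X$ assigns to each element both a membership degree $\mu_A$ and a non-membership degree $\nu_A$,

```latex
A = \{\, \langle x, \mu_A(x), \nu_A(x) \rangle \mid x \in X \,\}, \qquad
\mu_A, \nu_A : X \to [0,1], \qquad
0 \le \mu_A(x) + \nu_A(x) \le 1 ,
```

so that $\pi_A(x) = 1 - \mu_A(x) - \nu_A(x)$ measures the remaining hesitation; ordinary fuzzy sets are the special case $\nu_A = 1 - \mu_A$.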
Keywords
(regular, simple) $\Gamma$-semigroup; $\Gamma$-subsemigroup; (interior) $\Gamma$-ideals; $\Gamma$-bi-ideal; intuitionistic fuzzy $\Gamma$-subsemigroup; intuitionistic fuzzy (interior) $\Gamma$-ideals; intuitionistic fuzzy $\Gamma$-bi-ideal; intuitionistic fuzzy left simple $\Gamma$-semigroup
Language
English
Cited by
1.
Intuitionistic (λ,μ)-fuzzy sets in Γ-semigroups, Journal of Inequalities and Applications, 2013, 2013, 1, 107
2.
A study on fuzzy interior ideals of Γ-semigroups, Computers & Mathematics with Applications, 2010, 60, 1, 90
3.
A novel approach towards fuzzy Γ-ideals in ordered Γ-semigroups, Indian Journal of Pure and Applied Mathematics, 2014, 45, 3, 343
4.
A novel intuitionistic fuzzy clustering method for geo-demographic analysis, Expert Systems with Applications, 2012, 39, 10, 9848
5.
Structure of Intuitionistic Fuzzy Sets inΓ-Semihyperrings, Abstract and Applied Analysis, 2013, 2013, 1
References
1.
K. T. Atanassov, Intuitionistic fuzzy sets, Fuzzy Sets and Systems 20 (1986), no. 1, 87-96
2.
K. T. Atanassov, New operations defined over the intuitionistic fuzzy sets, Fuzzy Sets and Systems 61 (1994), no. 2, 137-142
3.
K. H. Kim and Y. B. Jun, Intuitionistic fuzzy interior ideals of semigroups, Int. J. Math. Math. Sci. 27 (2001), no. 5, 261-267
4.
K. H. Kim, Intuitionistic fuzzy ideals of semigroups, Indian J. Pure Appl. Math. 33 (2002), no. 4, 443-449
5.
N. Kuroki, On fuzzy ideals and fuzzy bi-ideals in semigroups, Fuzzy Sets and Systems 5 (1981), no. 2, 203-215
6.
N. Kuroki, Fuzzy semiprime ideals in semigroups, Fuzzy Sets and Systems 8 (1982), no. 1, 71-79
7.
N. K. Saha, On $\Gamma$-semigroup II, Bull. Calcutta Math. Soc. 79 (1987), no. 6, 331-335
8.
N. K. Saha, On $\Gamma$-semigroup III, Bull. Calcutta Math. Soc. 80 (1988), no. 1, 1-12
9.
M. K. Sen and N. K. Saha, On $\Gamma$-semigroup I, Bull. Calcutta Math. Soc. 78 (1986), no. 3, 180-186
10.
L. A. Zadeh, Fuzzy sets, Information and Control 8 (1965), 338-353 | 2018-04-23 19:13:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 11, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8288931250572205, "perplexity": 4364.311388171922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946165.56/warc/CC-MAIN-20180423184427-20180423204427-00169.warc.gz"} |
http://www.helpteaching.com/questions/Science/Grade_9 | # Ninth Grade (Grade 9) Science Questions
Create printable tests and worksheets from Grade 9 Science questions. Select questions to add to a test using the checkbox above each question. Remember to click the add selected questions to a test button before moving to another page.
Grade 9 :: Biochemistry by PMISIAS
Cellular respiration is
1. The process of breaking down chemical energy stored in bonds into energy that can be used by the cell
2. The process of creating food for the cell
3. The process of combining oxygen and water to produce another substance to be transported
4. The process of making proteins
Grade 9 :: Cell Structure and Function by PMISIAS
A student views cells from several different prokaryotic and eukaryotic organisms under a high-powered microscope. Which of the following statements describes how the prokaryotic cells appear different from the eukaryotic cells?
1. The prokaryotic cells do not have a nucleus
2. The prokaryotic cells are much larger
3. The prokaryotic cells have mitochondria
4. The prokaryotic cells have a less distinct shape
Grade 9 :: Scientific Method by kurtman
Choose the answer that best displays the use of an independent and dependent variable.
1. Frank sets the temperature of a room to 31 degrees Celsius. He will come back later to record the height of the petunias.
2. Mike creates three different solutions. He pours each one on 3 different plants and will observe which one grows faster.
3. Emily observes a change in the color of a metal; she records the color in her data section.
4. Daniel changes his data after realizing it does not align with his hypothesis. He adds extra information to his conclusion which he made up.
Grade 9 :: Properties of Matter by lisamarie137
Define volume.
1. the amount of space occupied by an object
2. the measurement of the quantity of matter in an object
3. the mass per unit volume of a material
4. the mass per unit density of a material
Grade 9 :: Properties of Matter by lisamarie137
What is the formula used to find the mass of an object?
1. $M=DV$
2. $M=D/V$
3. $M=M/V$
4. $M=VM$
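The mass/volume/density questions above all rest on the single relation $D = M/V$; a minimal numeric check (the aluminum values are just an example):

```python
density = 2.70   # g/cm^3 (aluminum, as an example value)
volume = 10.0    # cm^3
mass = density * volume   # rearranging D = M / V gives M = D * V
print(mass, "g")
```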
Grade 9 :: Scientific Method by jlovering
Victoria grows the same bacteria in 20 petri dishes. She places 10 of the dishes in a container with a normal atmosphere. The remaining dishes she places in a container in which the oxygen level is double the normal level. She labels the first group "A" and the second group "B". Which of the following best describes the groups?
1. Group A is the control group; Group B is the experimental group
2. Group A is the experimental group; Group B is the control group
3. Group A is the hypothesis; Group B is the theory
4. Group A is the variable; Group B is the observation
Grade 9 :: Biochemistry by PMISIAS
Which of the following accurately describes the products of photosynthesis?
1. oxygen and carbon dioxide
2. glucose and oxygen
3. carbon dioxide and water
4. water and glucose
Grade 9 :: Biochemistry by PMISIAS
Which of the following accurately describes the products of cellular respiration?
1. carbon dioxide and water
2. oxygen and carbon dioxide
3. water and glucose
4. glucose and oxygen
Grade 9 :: Biochemistry by PMISIAS
Which of the following most accurately describes the purpose of cellular respiration?
1. to break down glucose into energy that can be used for cellular functions
2. to break down usable energy into smaller parts
3. to break down different types of substances to be used to build other parts of the cell
4. all of the above
Grade 9 :: Properties of Matter by lisamarie137
Define mass.
1. the amount of space occupied by an object
2. the measurement of the quantity of matter in an object
3. the mass per unit volume of a material
4. the mass per unit density of a material
Grade 9 :: Cell Structure and Function by PMISIAS
Which of the following accurately describes the locations of photosynthesis and respiration in plant cells?
1. Both photosynthesis and respiration occur in the cytoplasm
2. Photosynthesis occurs in the chloroplast, while respiration occurs in the mitochondria
3. Photosynthesis occurs in the mitochondria, while respiration occurs in the chloroplast
4. Photosynthesis occurs in the chloroplast, but plant cells do not perform respiration
Grade 9 :: Biochemistry by PMISIAS
Enzymes are critical for biological reactions to occur in an organism. Enzymes help chemical reactions occur because
1. They speed up reactions by adding more substrates
2. They slow reactions down by increasing the activation energy
3. They speed up reactions by lowering the activation energy
4. They slow down reactions by changing the overall temperature
Grade 9 :: Biochemistry by PMISIAS
Which of the following accurately describes the purpose of photosynthesis?
1. Release energy by breaking down glucose
2. Store energy in the form of carbon dioxide
3. Use sunlight energy to create sugars
4. Create water that can hydrate the body
Grade 9 :: Cell Structure and Function by Shannyne
What are all living things made of?
1. bones
2. cells
3. blood
4. teeth
https://questioncove.com/updates/502842dee4b0c8aae6079cf1 | OpenStudy (anonymous):
How do you find speed of parametrics? is it sqrt(x^2 + y^2) ?
5 years ago
OpenStudy (anonymous):
The usual kind of parametric equation is x = f(t), y = g(t) ??
5 years ago
OpenStudy (anonymous):
yeah
5 years ago
OpenStudy (anonymous):
so where is speed here?
5 years ago
OpenStudy (anonymous):
The usual idea is that t = 1 is "unit time" and then you have equations for x, y etc.
5 years ago | 2017-11-19 21:17:30 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945544421672821, "perplexity": 10342.409625647862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805809.59/warc/CC-MAIN-20171119210640-20171119230640-00239.warc.gz"} |
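To close the loop on the original question: the speed of a parametric curve (x(t), y(t)) is sqrt(x'(t)^2 + y'(t)^2), built from the derivatives, whereas sqrt(x^2 + y^2) is just the distance from the origin. A quick numerical illustration on an ellipse (an arbitrary example curve):

```python
import math

def speed(x, y, t, h=1e-6):
    """Speed of the parametric curve (x(t), y(t)) via central differences."""
    dx = (x(t + h) - x(t - h)) / (2 * h)
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return math.hypot(dx, dy)

x = lambda t: 2 * math.cos(t)   # ellipse with semi-axes 2 and 1
y = lambda t: math.sin(t)

print(speed(x, y, 0.0))              # speed at t=0 is sqrt(0^2 + 1^2), i.e. ~1
print(math.hypot(x(0.0), y(0.0)))    # distance from origin at t=0 is 2: not the speed
```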
https://www.genetics.org/highwire/markup/396448/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed | Table 2 Comparison of average reliabilities of genomic predictions at different heritabilities for five different scenarios obtained with the deterministic formulas of VanRaden (2008) (rel_VR) and Daetwyler et al. (2008) (rel_D), using the estimated number of effective chromosome segments (Me)
| Scenario^a | Me | Rel_VR | Rel_D |
|---|---|---|---|
| 0.6FREQ | 122116 | 0.002 | 0.003 |
| 0.6LD | 11458 | 0.022 | 0.027 |
| 0.6HAP | 14627 | 0.018 | 0.021 |
| 0.6CHR | 2139 | 0.100 | 0.129 |
| 0.6FAM | 837 | 0.318 | 0.275 |
|  | 805^b |  | 0.283 |
|  | 7774^c |  | 0.039 |
| 0.1FREQ | 122116 | 0.0004 | 0.0004 |
| 0.1LD | 11458 | 0.004 | 0.005 |
| 0.1HAP | 14627 | 0.003 | 0.004 |
| 0.1CHR | 2139 | 0.021 | 0.024 |
| 0.1FAM | 837 | 0.104 | 0.059 |
|  | 805^b |  | 0.062 |
|  | 7774^c |  | 0.007 |
• a estimated based on the genomic and additive genetic relationship matrices (Equation 6).
• b estimated as (Goddard 2009).
• c estimated as (Hayes et al. 2009d). | 2019-07-23 15:51:46 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8691058158874512, "perplexity": 13666.585565541602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529480.89/warc/CC-MAIN-20190723151547-20190723173547-00224.warc.gz"} |
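For context, the deterministic formula of Daetwyler et al. (2008) used for rel_D is commonly quoted as r^2 = N*h^2 / (N*h^2 + Me), with N the number of phenotyped and genotyped individuals. A sketch (the value of N below is made up for illustration; the table's actual N is not shown here):

```python
def rel_daetwyler(n, h2, m_e):
    """Expected reliability r^2 = N*h^2 / (N*h^2 + Me) (Daetwyler et al. 2008)."""
    return n * h2 / (n * h2 + m_e)

N = 1000  # hypothetical reference-population size
for m_e in (837, 2139, 11458, 122116):   # Me values echoing the table
    print(m_e, round(rel_daetwyler(N, 0.6, m_e), 3))
```

The qualitative pattern of the table is reproduced: reliability falls as the number of effective chromosome segments grows, and (comparing h2 = 0.6 with h2 = 0.1) as heritability drops.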
http://mathhelpforum.com/algebra/10994-logarithmic-problems.html | # Math Help - logarithmic problems
1. ## logarithmic problems
I have been working on these problems and I must have a logic problem, could you put me in the right direction? Here is the problem and what I have done:
If A=2, B=3, C=5 I am supposed to evaluate the following:
ln(A^3/B^-3C^3)
ln(A^3)-ln(B^-2)-ln(C^3)
3lnA + 2lnB - 3lnC
3ln(2) + 2ln(3) - 3ln(5) = -5516476183
The back of my calc book tells me the answer is incorrect, could you let me know where I went wrong.
Thank you!!!!!!!!!!!!!!!!!
Keith Stevens
2. Hello, Keith!
Did you leave out part of the problem?
If $A=2,\:B=3,\: C=5$, evaluate: . $\ln\left(\frac{a^3}{b^{-3}c^3}\right)$
There is a difference between $A$ and $a$.
Is $A$ somehow related to $a$ ?
If not, it's a truly stooopid problem . . .
Here's another: .If $P = 2$ and $Q = 7$, evaluate $x + y$
3. Originally Posted by kcsteven
3ln(2) + 2ln(3) - 3ln(5) = -5516476183
What??? You DID mean: -0.5516476183, yes?
Originally Posted by kcsteven
If A=2, B=3, C=5 I am supposed to evaluate the following:
ln(A^3/B^-3C^3)
Your problem is a slight typo in the second term:
$ln \left ( \frac{A^3}{B^{-3}C^3} \right )$
$3ln(A) - (-3)ln(B) - 3ln(C) = 3ln(2) + 3ln(3) - 3ln(5) \approx 0.546965$
-Dan | 2014-10-24 13:09:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458637237548828, "perplexity": 1206.4226470296471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645920.6/warc/CC-MAIN-20141024030045-00029-ip-10-16-133-185.ec2.internal.warc.gz"} |
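Dan's expansion is easy to verify numerically; a quick sanity check:

```python
import math

A, B, C = 2, 3, 5
direct   = math.log(A**3 / (B**-3 * C**3))   # ln(A^3 / (B^-3 C^3)) = ln(216/125)
expanded = 3 * math.log(A) + 3 * math.log(B) - 3 * math.log(C)
print(round(direct, 6), round(expanded, 6))
```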
https://dsp.stackexchange.com/questions/1103/how-does-ica-handle-inevitable-delays-in-signals | # How does ICA handle inevitable delays in signals?
I am currently reading up on and teaching myself ICA from a number of good sources. (Also see this post for past context). I have the basic jist down, but there is something I am not clear about.
For a scenario where multiple signals are impinging on multiple spatial sensors, (of course, with number of sensors >= number of signals), it is inevitable that for any one sensor, all the signals arriving to it will have different delays/phase-offsets associated with them, as compared to those arriving at a different sensor.
Now, as far as I know, the signal model for ICA is a simple mixing matrix, where the total energy arriving at any one sensor is modeled as nothing but a simple linear combination of all the other signals of interest. Every sensor has a different array of linear combination coefficients associated with it. So far so good.
What I do not understand, is that inevitably there are going to in fact be some delay/phase-offset among individual signals arriving at individual sensors that differ from one another. That is, $s_1(n)$ might arrive at $sensor_1$ at some time 0s, while that same $s_1(n)$ arrives at $sensor_2$ attenuated, but also at some delay or phase difference. The way I see it this is physically inevitable.
...How can it be that this is not modeled in the mixing matrix? It seems that delays will make a huge difference. No longer are we talking about simple linear combinations anymore. How does ICA handle this? Have I missed something here?
I should also add as an addendum, if indeed ICA cannot handle delays, then what applications does it find usefulness in? Clearly spatial ones with sensors are out!
Thanks
• I think ICA is meant for things in which there are no delays. I don't know why they always use an example of lots of people talking in a room, since that application doesn't actually work with ICA. Something like DUET is a better fit for this application. dsp.stackexchange.com/questions/812/… – endolith Jan 9 '12 at 14:47
• @endolith Thanks Endolith, I have included our previous exchange here as well as a link. That post spurred my interest and but further reading of my book didnt make it clearer. :-/ I will check out DUET. – Spacey Jan 9 '12 at 16:33
• @endolith One other thing - this sort of begs the question as to where exactly one is able to use ICA in practical applications. To me as it stands, it will be completely useless for any spatial application (where you have multiple sensors) for the delay reason. If this is the case, then where does ICA find fruitfulness? – Spacey Jan 9 '12 at 16:42
• @Mohammad Looking up the article "COMBINING TIME-DELAYED DECORRELATION AND ICA: TOWARDS SOLVING THE COCKTAIL PARTY PROBLEM" might be of help. I'm guessing you are trying to do speaker separation. This problem might be found in the literature as multichannel blind deconvolution. I am also interested in the problem you have described above, if you want you can contact me at the email in my profile. – TwoSan Jan 10 '12 at 19:36
• @TwoSan Thanks, I will look you up, and I have also emailed you. – Spacey Jan 11 '12 at 22:14
## 1 Answer
One of the most successful uses of ICA has been in the study of electrophysiology (i.e. brain activity), principally EEG (electroencephalography) and MEG (magnetoencephalography). There it is used to remove artefacts (such as electrical impulses caused by muscle movements, e.g. eye blinks) without the need for reference channels. In this application the sensor spacing is tiny compared with the propagation speed of the waves, so the inter-sensor delays are negligible and the assumptions of ICA effectively hold.
For fMRI, which relies on blood flow in the brain, the temporal delay issue is more significant. One approach, taken in the paper Latency (in)sensitive ICA. Group independent component analysis of fMRI data in the temporal frequency domain by Calhoun et al (2003) attempted to solve this problem by making estimates of the time delay in each voxel, and then using this as prior information in a modified ICA. Maybe something like this could be applied in your domain?
• Thanks for your post tdc, that is interesting and makes sense - for an EEG, (a spatial application) the waveforms being measured are the electrical field strengths that are travelling at the speed of light (or close to it), over distances that are very small (across the head) relative to the waveforms' speed. – Spacey Jan 10 '12 at 21:01
• As far as my applications, I am looking to somehow use this (or some mutation as you have mentioned) for acoustic applications. I have 2 sensor arrays that I can use. On one of them, the distances between mics are tens of meters, so the delays are very significant. On the other sensor array, the distances between the mics are on the order of the wavelength, $1\lambda$, or $\frac{1}{2}\lambda$. Do you think for those types of distances on order of $\lambda$ pure ICA holds? If not, how 'small' of distances relative to wavelengths does one need for pure ICA to work? – Spacey Jan 10 '12 at 21:07
• If you take the speed of sound for a typical day to be 332 m/s and an example frequency of 111 Hz, that equates to a wavelength of ~3m. If you have two sensors, one of which is 3m away from the source, and the other 4.5m away, the two signals will be completely out of phase. In this scenario I expect ICA to fail horribly. However if the two sensors are, say, 3m and 3.01m from the source, it would probably work. Just stating the separation of the sensors is not enough - you need to know how far the (typical) sources will be from the sensors, so that you can work out the relative temporal delay – tdc Jan 11 '12 at 12:41
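tdc's back-of-the-envelope numbers are easy to script; a small sketch using the same example values (332 m/s, 111 Hz):

```python
def relative_phase_deg(freq_hz, d1_m, d2_m, c_m_s=332.0):
    """Phase difference (degrees) at two sensors d1 and d2 metres from the
    same source, for a pure tone of the given frequency."""
    wavelength = c_m_s / freq_hz
    return abs(d1_m - d2_m) / wavelength * 360.0 % 360.0

# the comment's example: a 111 Hz tone, sensors 3 m and 4.5 m from the source
print(relative_phase_deg(111.0, 3.0, 4.5))    # ~180.5 deg: roughly out of phase
# nearly co-located sensors, 3 m vs 3.01 m
print(relative_phase_deg(111.0, 3.0, 3.01))   # ~1.2 deg: instantaneous mixing is a fair model
```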
https://www.techwhiff.com/issue/a-squirrel-is-sitting-in-a-tree-he-holds-an-acorn-6--512189 | # A squirrel is sitting in a tree. He holds an acorn 6 inches above his feet and drops it. The acorn lands in the head of a person walking by who is 37 inches below the feet of the squirrel. How far did the acorn travel?
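For reference, the drop is just the two vertical distances added together; a one-line check:

```python
# 6 inches from the acorn down to the squirrel's feet,
# plus 37 inches from the feet down to the person's head
drop_inches = 6 + 37
print(drop_inches)  # 43
```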
### What is the numerical solution of sec^2(-pi/4)?
### What number should be in the blank in the sequence? 7; 17; 37; 77; ___ ; 317
### Evaluate 9 ÷ 3[(18 − 6) − 2²]. a: 0.188 b: 0.375 c: 24 d: 48
### In your opinion, what are some factors that contribute to the increase in obesity? What are some changes that you would suggest for people to make to their daily food and exercise habits, to help reduce obesity? The US Surgeon General says that today's children may be the first in history to have a shorter life expectancy than their parents. As a young person, how does this make you feel?
### A laptop computer is smaller than a desktop computer. True or false
### Izzy Division of Marine Boats Corporation had the following results last year (in thousands). Sales $4,700,000; Operating income $600,000; Total assets $3,600,000; Current liabilities $220,000. Management's target rate of return is 12% and the weighted average cost of capital is 5%. What is the Izzy Division's Residual Income (RI)? A. $72,000 B. $432,000 C. $168,000 D. $600,000
### Part 2 Connecting Sentences and Clauses Quiz Complete
### URGENT PLEASE HELP 45 POINTS! Complete this activity. Given: f(x) = x + 2, [f(x)]² =
### Help me pls I’ll mark you as brain
### What is glaciation? Glaciation is the process by which mountains are formed. Glaciation is the process of melting ice caps due to global warming. Glaciation is the process whereby snow has been converted to ice by the force of gravity and pressure. Glaciation is the process of ravine formation due to extensive ice coverage over a large area of land.
### What is the subject in the following sentence? After breakfast, she needs to run a few errands. A. after B. she C. errands D. breakfast
### If an angle measures 100 degrees, through what fraction of a circle does the angle turn? A. 1/100 B. 1/4 C. 100/360 D. 1/2
### Analyze the factors that determine the differences in temperature observed with latitude, seasons, and altitude
### How much GPE does an object have that is 2m high with a mass of 8kg?
### What is the central idea of The Canoe Breaker
### Briefly explain 3 roles that each institution plays in addressing violations of human rights
### PLEASE ILL GIVE BRAINLIEST GIVE ME y=mx+b FORM
### If the side ratio is 9:12, then the volume ratio is
https://blog.splayx.com:10443/?cat=26 | ## HDU 6039 Gear Up 并查集 dfs序 线段树
http://acm.hdu.edu.cn/showproblem.php?pid=6039
# Gear Up
Time Limit: 8000/4000 MS (Java/Others) Memory Limit: 131072/131072 K (Java/Others)
### Problem Description
constroy has some gears, each with a radius. Two gears are considered adjacent if they meet one of the following conditions:
1. They share a common edge (i.e. they have equal linear velocity).
2. They share a common shaft (i.e. they have equal angular velocity).
It is guaranteed that no pair of gears meets both of the above conditions.
A series of continuous adjacent gears constitutes a gear path. There is at most one gear path between each two gears.
Now constroy assigns an angular velocity to one of these gears and then asks you to determine the largest angular velocity among them.
sd0061 thinks this problem is too easy, so he replaces some gears and then asks you the question again.
### Input
There are multiple test cases (about $30$).
For each test case:
The first line contains three integers $n, m, q$, the number of gears, the number of adjacent pairs and the number of operations. $(0 \leq m < n \leq 10^5, 0 \leq q \leq 10^5)$
The second line contains $n$ integers, of which the $i$-th integer represents $r_i$, the radius of the $i$-th gear. $(r_i \in \{2^\lambda \mid 0 \leq \lambda \leq 30\})$
Each of the next $m$ lines contains three integers $a, x, y$, the $x$-th gear and the $y$-th gear are adjacent in the $a$-th condition. $(a \in \{1, 2\}, 1 \leq x, y \leq n, x \neq y)$
Each of the next $q$ line contains three integers $a, x, y$, an operation ruled in the following: $(a \in \{1, 2\}, 1 \leq x \leq n, y \in \{2^\lambda \mid 0 \leq \lambda \leq 30\})$
$a = 1$ means to replace the $x$-th gear with another one of radius $y$.
$a = 2$ means to assign angular velocity $y$ to the $x$-th gear and then determine the maximum angular velocity.
### Output
For each test case, firstly output "Case #$x$:" in one line (without quotes), where $x$ indicates the case number starting from $1$, and then for each operation of $a = 2$, output in one line a real number, the natural logarithm of the maximum angular velocity, with the precision of $3$ digits.
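Since every radius is a power of two, angular velocities can be tracked through their base-2 logarithms. The brute-force sketch below answers a single type-2 query; it is far too slow for the stated limits (the union-find / DFS-order / segment-tree machinery in the title is what makes updates fast), but it shows the velocity propagation rules. The instance at the bottom is made up:

```python
from collections import deque

def max_log2_angular_velocity(n, radii_log2, edges, start, w_log2):
    """edges: (a, x, y) triples, a=1 -> shared edge (equal linear velocity),
    a=2 -> shared shaft (equal angular velocity); gears are 1-indexed and
    radii_log2[i] is log2 of gear i's radius. Returns the maximum
    log2(angular velocity) in start's component, given log2(w_start) = w_log2."""
    adj = [[] for _ in range(n + 1)]
    for a, x, y in edges:
        adj[x].append((y, a))
        adj[y].append((x, a))
    best = w_log2
    seen = {start}
    dq = deque([(start, w_log2)])
    while dq:
        u, wu = dq.popleft()
        for v, a in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # shared edge: w_v * r_v = w_u * r_u  =>  log w_v = log w_u + log r_u - log r_v
            wv = wu + radii_log2[u] - radii_log2[v] if a == 1 else wu
            best = max(best, wv)
            dq.append((v, wv))
    return best

# toy instance: radii 2^1, 2^0, 2^2; gear 1 meshes with gear 2; gears 2 and 3 share a shaft
radii = [0, 1, 0, 2]  # index 0 unused
print(max_log2_angular_velocity(3, radii, [(1, 1, 2), (2, 2, 3)], 1, 0))  # 1
```

The judge's answer would then be the returned exponent times $\ln 2$, since the problem asks for the natural logarithm of the maximum angular velocity.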
# Differencia
Time Limit: 10000/10000 MS (Java/Others) Memory Limit: 65536/65536 K (Java/Others)
### Problem Description
Professor Zhang has two sequences $a_1,a_2,\dots,a_n$ and $b_1,b_2,\dots,b_n$. He wants to perform two kinds of operations on the sequences:
1. + l r x: set $a_i$ to $x$ for all $l \leq i \leq r$.
2. ? l r: find the number of $i$ such that $a_i \geq b_i$ and $l \leq i \leq r$.
### Input
There are multiple test cases. The first line of input contains an integer $T$, indicating the number of test cases. For each test case:
The first line contains four integers $n$, $m$, $A$ and $B$ $(1 \leq n \leq 10^5, 1 \leq m \leq 3000000, 1 \leq A, B \leq 2^{16})$ -- the length of the sequences, the number of operations and two parameters.
The second line contains $n$ integers $a_1,a_2,\dots,a_n$ $(1 \leq a_i \leq 10^9)$. The third line contains $n$ integers $b_1,b_2,\dots,b_n$ $(1 \leq b_i \leq 10^9)$.
As there are too many operations, the m operations are specified by parameters A and B given to the following generator routine.
For the i-th operation, first call rnd(last) three times to get l, r and x (i.e. l = rnd(last) % n + 1, r = rnd(last) % n + 1, x = rnd(last) + 1). Then if l>r, you should swap their value. And at last, the i-th operation is type ?, if (l+r+x) is an even number, or type + otherwise.
Note: last is the answer of the latest type ? operation and assume last=0 at the beginning of each test case.
### Output
For each test case, output an integer $S=\left(\sum_{i=1}^{m} i \cdot z_i\right) \bmod (10^9+7)$, where $z_i$ is the answer for the $i$-th operation. If the $i$-th operation is of type +, then $z_i=0$.
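A direct, unoptimized model of the two operations (the query counting positions where $a_i \geq b_i$; the operator is partly an interpretation of the garbled source text). The intended solution replaces both loops with a segment tree whose nodes store sorted fragments of $b$, so an assignment can be counted with a binary search; this sketch only pins down the semantics:

```python
def op_assign(a, l, r, x):
    """+ l r x : set a[i] = x for all l <= i <= r (1-indexed)."""
    for i in range(l - 1, r):
        a[i] = x

def op_query(a, b, l, r):
    """? l r : count i in [l, r] with a[i] >= b[i] (comparison assumed)."""
    return sum(1 for i in range(l - 1, r) if a[i] >= b[i])

a = [5, 4, 3, 2, 1]
b = [1, 2, 3, 4, 5]
print(op_query(a, b, 1, 5))   # 3  (positions 1, 2 and 3)
op_assign(a, 2, 4, 10)
print(op_query(a, b, 1, 5))   # 4
```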
### Sample Input
3
5 10 1 2
5 4 3 2 1
1 2 3 4 5
5 10 3 4
5 4 4 2 1
1 2 3 4 5
5 10 5 6
5 4 5 2 1
1 2 2 4 5
### Sample Output
81
88
87
### Author
zimpha
### Source
https://www.physicsforums.com/threads/russells-paradox-and-logical-errors-in-the-proof.687002/page-2 | # Russell's paradox and logical errors in the proof
Dear Stephen Tashi,
Your arguments show that R cannot exist because assuming it exists contradicts the axiom that "If S is a set and x is an element then either x is a member of S or x is not a member of S".
My arguments are to demonstrate that there is a logical error in each of the two logical arguments in Russell's paradox. :) Please see my message#20 in this thread :)
Yours,
Dan
rubi
Dear Dan,
I don't understand what you are trying to say. I have shown (using conventional predicate logic) that the statement $\exists R\forall x (x\in R \leftrightarrow x\notin x)$ implies a contradiction ($(\exists R\forall x (x\in R \leftrightarrow x\notin x))\rightarrow\bot$).
Now i need you to answer the following question: Do you agree that this derivation is correct?
If the answer is no: Which step in the derivation is wrong? (You should be able to answer this question in one sentence, because you only need to point at one single step!) To make it even more clear, here is the derivation again:
1. $\exists R\forall x (x\in R \leftrightarrow x\notin x)$
2. $\forall x (x\in S \leftrightarrow x\notin x)$ (from 1; if such an R exists, I can assign it to an unused variable $S$)
3. $S\in S \leftrightarrow S\notin S$ (from 2; if it holds for all $x$, it also holds for $S$)
4. $\bot$ (from 3; it's a contradiction)
5. $(\exists R\forall x (x\in R \leftrightarrow x\notin x))\rightarrow\bot$ (by the deduction theorem)
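This derivation can also be machine-checked; here is a Lean 4 sketch with an abstract relation `mem` standing in for set membership:

```lean
-- From ∃ R, ∀ x, (x ∈ R ↔ x ∉ x) we derive False,
-- with an abstract membership relation `mem` in place of ∈.
theorem russell {α : Type} (mem : α → α → Prop) :
    ¬ ∃ R, ∀ x, mem x R ↔ ¬ mem x x :=
  fun ⟨R, h⟩ =>
    have hR : mem R R ↔ ¬ mem R R := h R          -- step 3: instantiate x := R
    have hn : ¬ mem R R := fun hm => (hR.mp hm) hm
    hn (hR.mpr hn)                                 -- step 4: contradiction
```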
If the answer is yes, you have the following options: Either you accept that $\exists R\forall x (x\in R \leftrightarrow x\notin x)$ is false or you are willing to accept contradictory statements in your axiomatic system. Which one of these options do you choose?
Last edited:
Dear Rubi,
You correctly applied the axiom of predicate calculus.
Have you read my message #20 above? Have you understood everything there?
Yours,
Dan
rubi
Have you read my message #20 above?
Yes, i have.
Have you understood everything there?
No i haven't. That's why i want you to answer my question. I'm not willing to spend more time on this if you refuse to answer my questions.
Dear Rubi,
"∃R∀x(x∈R↔x∉x) is false" - this is correct, of course :) There is no such R :). I am not making any alternative "my" axiomatic system )))).
What exactly (which line) have you failed to understand in my message#20?
Yours,
Dan
rubi
"∃R∀x(x∈R↔x∉x) is false" - this is correct, of course :) There is no such R :).
Then we are done, because we came to the same conclusion as Russell did. There is no way to define the set $R = \{x:x\notin x\}$ in usual predicate logic without implying a contradiction. This is exactly Russell's paradox.
What exactly (which line) have you failed to understand in my message#20?
"Both assumptions (R∈R is true and R∈R is false) contradict the definition of R" doesn't make sense, because you can't contradict a definition. Your use of language is wrong.
Dear Rubi,
Then we are done, because we came to the same conclusion as Russell did. There is no way to define the set $R = \{x:x\notin x\}$ in usual predicate logic without implying a contradiction. This is exactly Russell's paradox.
Let R be the set of all sets that are not members of themselves. Then it is a member of itself if and only if it is not a member of itself. - paradoxical incoherence.
The usual conclusion that we make from the paradox is that there is no such R.
"Both assumptions (R∈R is true and R∈R is false) contradict the definition of R" doesn't make sense, because you can't contradict a definition. Your use of language is wrong.
"Let ABC be a triangle. Suppose it has four angles." :) - well, this is a sort of joke ))
If seriously, enter "contradicts the definition" into a Google search and find some math texts ;)
Yours,
Dan
rubi
Let R be the set of all sets that are not members of themselves. Then it is a member of itself if and only if it is not a member of itself. - paradoxical incoherence.
We proved this and you already said that you accept the derivation.
The usual conclusion that we make from the paradox is that there is no such R.
That is true. If a statement leads to a contradiction, then it must be false. You also agreed on this.
If you really agree that one can derive a contradiction from the assumption of the existence of the Russell set, then I don't see what problems remain.
"Let ABC be a triangle. Suppose it has four angles." :) - well, this is a sort of joke ))
If seriously, enter "contradicts the definition" into google serach and find some math texts ;)
What I was trying to say was that it is irrelevant for the proof whether something contradicts the definition of the Russell set. $R\in R$ is a well-formed formula and thus you are allowed to use it according to the rules of predicate logic.
Dear Rubi,
I said you correctly used the axiom of predicate logic ))
Actually, I suspect you did not even try to read my message#20 after the sentence "Both assumptions (R∈R is true and R∈R is false) contradict the definition of R (Let R be the set of all sets that are not members of themselves)"
Which makes further discussion with you senseless ))
Yours,
Dan
rubi
I said you correctly used the axiom of predicate logic
So either you agree that the existence of R implies $R\in R\leftrightarrow R\notin R$ or you claim that using predicate logic is not a valid way of reasoning. I have still not found out, which of these two options you advocate.
Actually, I suspect you did not even try to read my message#20 after the sentence "Both assumptions (R∈R is true and R∈R is false) contradict the definition of R (Let R be the set of all sets that are not members of themselves)"
Which makes further discussion with you senseless
I have read it, but i don't see what the problem is. Let's look at this part for example:
Argument 1
Premise 1: Let R be the set of all sets that are not members of themselves R = {x: x∉x}
Premise 2 (assumption): Suppose R ∈ R.
Conclusion: Then, according to its definition, R ∉ R.
We can easily phrase this in predicate logic:
1. $\forall x (x \in R \leftrightarrow x\notin x)$ (this is your first premise)
2. $R \in R$ (this is your second premise)
3. $R \in R \leftrightarrow R\notin R$ (from 1 by setting $x=R$)
4. $R \notin R$ (from 2 and 3 by modus ponens)
This is completely valid reasoning. $(\forall x (x \in R \leftrightarrow x\notin x))\rightarrow R\notin R$ is a theorem.
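Argument 1 can likewise be machine-checked; a Lean 4 sketch with an abstract membership relation `mem`:

```lean
-- Argument 1 formalized: premises 1 and 2 yield the conclusion R ∉ R.
theorem argument1 {α : Type} (mem : α → α → Prop) (R : α)
    (h1 : ∀ x, mem x R ↔ ¬ mem x x)   -- premise 1: R = {x : x ∉ x}
    (h2 : mem R R) :                   -- premise 2: suppose R ∈ R
    ¬ mem R R :=
  (h1 R).mp h2                         -- modus ponens on the instantiated biconditional
```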
Dear Rubi,
Yes, in predicate logic Argument 1 is a theorem. I know it ))
My point is that assumption R ∈ R contradicts the definition of R because if R ∈ R, R includes a member that is included in itself (R itself is such a member).
Or, symbolically: R ∈ R → ∃ y: y ∈ R ∧ y ∈ y → R ≠ {x: x∉x}: indeed, such y exists as we can take y = R
That is, the second premise contradicts the first one: a "contradictory premises" logical error.
Like in the below:
Premise 1: Let Dan be a completely legless man
Premise 2: Suppose, Dan’s right ankle is severely bleeding
Conclusion: Then, according to his definition, Dan should be taken to an emergency for legless people (for his ankle bleeding).
The reasoning on 'legless Dan' contains the same logical error as Russell's paradox does: the second premise contradicts the definition (the first premise), even though both premises are used to reach the conclusion.
Yours,
Dan
rubi
My point is that assumption R ∈ R contradicts the definition of R
And why is that a bad thing? You can assume whatever you want. The entire point of Russell's paradox is to show that you end up with contradictions. If you don't want contradictions, then just don't assume that the Russell set exists.
symbolically: R ∈ R → ∃ y: y ∈ R ∧ y ∈ y → R ≠ {x: x∉x}: indeed, such y exists as we can take y = R
This is just yet another contradiction that you can derive if you assume the existence of the Russell set. Remember that if you have one contradiction in your axioms, you can derive every statement that you can think of.
Well-known American logician H. Curry once expressed the opinion that in spite of the fact that it seemed to be absolutely impossible to explain Russell's paradox in terms of conventional 19th century logic, it may happen in modern days that some error would be identified.
Can you give a reference where Curry said that?
Dear Micromass,
Hi! Remember me? ))
The reference is in the reference section of the paper)) It is a book by a Russian logician Ivin. He mentions this. By the way I am not quoting.
Yours,
Dan
Last edited:
Dear Rubi,
It is a logical error to make conclusion such a way )) Which is not a good thing )))
Like in the below:
Premise 1: Let Dan be a completely legless man
Premise 2: Suppose, Dan’s right ankle is severely bleeding
Conclusion: Then, according to his definition, Dan should be taken to an emergency for legless people (for his ankle bleeding).
Well, if you consider the example to be OK, Russell's paradox is OK as well
Yours,
Dan
Last edited:
rubi
It is a logical error )) Which is not a good thing )))
It just shows that if you assume the existence of the Russell set, you can derive contradictions. This is nothing new. It has been known for more than 100 years now. The conclusion is that you should not assume the existence of the Russell set or any axiom that implies it.
Like in the below:
Premise 1: Let Dan be a completely legless man
Premise 2: Suppose, Dan’s right ankle is severely bleeding
Conclusion: Then, according to his definition, Dan should be taken to an emergency for legless people (for his ankle bleeding).
You just showed that if you assume contradictory statements, then you end up with contradictions.
Well, if you consider the example is OK, Russell's paradox is OK as well
Depends on what you mean by "Russell's paradox is OK". If you mean that one can safely assume the existence of the Russell set, then it's not OK. But if you mean that the existence of the Russell set implies contradictions, then it's perfectly OK, because nobody forces us to assume its existence. We can just deny it and everyone is happy until someone finds a new paradox. You can easily disprove the existence of the Russell set in modern set theory.
Dear Rubi,
Reasoning on legless Dan is an example of "contradictory premises" logical error.
The same one as in Russell's paradox.
Yours,
Dan
Fredrik
Staff Emeritus
Gold Member
We all agree that the following statement is a theorem: There's no set ##R## such that ##R=\{x\,|\,x\notin x\}##.
(I proved it in #13, and rubi did it in several of his posts).
Dan is arguing that some attempted proofs are flawed, because they're making two contradictory assumptions. The two assumptions are ##R\in R## and ##R\notin R##.
In that case, I think Dan's concern is very easy to answer: No one is assuming that both of those statements are true.
Dan, I don't know how can you continue to claim that there's an error in "Russell's paradox" after agreeing that there are valid ways to prove the theorem. Do you mean something different from that theorem when you say "Russell's paradox"?
rubi
Reasoning on legless Dan is an example of "contradictory premises" logical error.
The same one as in Russell's paradox.
You can apply the deductive rules of predicate logic to any set of axioms you like. Some choices for your axioms might give you contradictions, though. The conclusion is that you should choose other axioms. This is exactly what happened: We found that naive set theory implies a contradiction (Russell's paradox), so we rejected it and invented ZFC instead. Now we hope that nobody finds a contradiction in ZFC anymore.
Fredrik
Staff Emeritus
Gold Member
Dan, I'm giving you one last chance to answer what I said in post #43 before I close the thread. Don't copy and paste from earlier. Is your entire paper, and this entire thread, based on the idea that some people are assuming both ##R\in R## and ##R\notin R##?
Dear Fredrik,
No!!! You missed the point, sorry. Please try to read my message#20 ))
I thought you missed it - that is why I wanted to put it here again...
Yours,
Dan
Fredrik
Staff Emeritus
Gold Member
I read post #20 again. OK, I think I see what you're saying: The two premises that you say are contradictory are not ##R\in R## and ##R\notin R##. It's (in argument 1) ##R\in R## and ##R=\{x\,|\,x\notin x\}##.
If that's what you meant, then my answer is that this is irrelevant, since no one considers argument 1 (or argument 2) to be a complete proof. ##R\in R## is not one of the assumptions that go into the proof. There's only one assumption (which is made only to obtain a contradiction), and that is that ##R=\{x\,|\,x\notin x\}##.
Also, no one is saying that arguments 1 and 2 together prove the theorem, because then we'd have three assumptions that don't agree with each other.
Do you agree that there is a valid proof of the theorem I stated in #43? (I'm thinking of the proof I posted in #13 and the similar proofs posted by rubi).
Like in the below:
Premise 1: Let Dan be a completely legless man
Premise 2: Suppose, Dan’s right ankle is severely bleeding
Conclusion: Then, according to his definition, Dan should be taken to an emergency for legless people (for his ankle bleeding).
These are not well-formed formulas, so I don't see the point of considering these statements. They imply nothing about math. Math only deals with well-formed formulas.
Dear Fredric,
If that's what you meant, then my answer is that this is irrelevant, since no one considers argument 1 (or argument 2) to be a complete proof.
Nor do I! My point is that each of them separately is a fallacious argument, that is, an argument containing a logical error.
Do you agree that there is a valid proof of the theorem I stated in #43? (I'm thinking of the proof I posted in #13 and the similar proofs posted by rubi).
Well it depends on what you mean by valid...
As to me, it contains logical errors, but at the same time it is a theorem in predicate logic (that is, quite a deducible, valid and correct thing in this formal system).
Yours,
Dan
Dear Micromass,
This is an example of the logical error.
I could not invent the example of this logical error in math different from Russell's paradox-like things...
Yours,
Dan
https://www.zbmath.org/?q=an%3A1410.90045 | # zbMATH — the first resource for mathematics
Transient analysis of an $$M/M/1$$ queue with impatient behavior and multiple vacations. (English) Zbl 1410.90045
Summary: In this paper, we carry out an analysis of a single-server queue with impatient customers and multiple vacations, where customers' impatience is due to the absence of the server upon arrival. Customers arrive at the system according to a Poisson process and have exponential service times. Explicit expressions are obtained for the time-dependent probabilities, mean and variance of the system size in terms of the modified Bessel functions, by employing generating functions along with continued fractions and the properties of the confluent hypergeometric function. Finally, some numerical illustrations are provided.
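The time-dependent behaviour described above can also be checked numerically. The sketch below is not the paper's Bessel-function/continued-fraction solution, and it omits the impatience and vacation features entirely: it just computes the transient state probabilities of a plain M/M/1 queue by uniformization on a truncated chain. All parameter choices (rates, truncation level, number of terms) are illustrative assumptions, not from the paper.

```python
import math

def mm1_transient(lam, mu, t, n_states=40, n_terms=400):
    """Transient probabilities p_n(t) of an M/M/1 queue started empty,
    computed by uniformization on a chain truncated at n_states - 1."""
    rate = lam + mu                          # uniformization constant >= max exit rate

    def step(p):
        """Apply one step of the uniformized discrete-time chain."""
        q = [0.0] * n_states
        for n, pn in enumerate(p):
            if pn == 0.0:
                continue
            up = lam / rate if n < n_states - 1 else 0.0
            down = mu / rate if n > 0 else 0.0
            q[n] += pn * (1.0 - up - down)   # self-loop keeps total mass at 1
            if n < n_states - 1:
                q[n + 1] += pn * up          # arrival
            if n > 0:
                q[n - 1] += pn * down        # service completion
        return q

    p = [0.0] * n_states
    p[0] = 1.0                               # system starts empty
    out = [0.0] * n_states
    w = math.exp(-rate * t)                  # Poisson(rate*t) weight for k = 0
    for k in range(n_terms):
        for n in range(n_states):
            out[n] += w * p[n]
        p = step(p)
        w *= rate * t / (k + 1)              # next Poisson weight, computed iteratively
    return out

p = mm1_transient(lam=1.0, mu=2.0, t=50.0)
print(p[0])
```

At t = 50 with lam = 1 and mu = 2 the distribution is already very close to the stationary geometric form p_n = (1 - rho) * rho^n with rho = 0.5, which gives a simple sanity check on the transient computation.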
##### MSC:
90B22 Queues and service in operations research
60K25 Queueing theory (aspects of probability theory)
##### References:
[1] Doshi, B., Queueing systems with vacations—A survey, Queue. Syst., 1, 29-66, (1986) · Zbl 0655.60089
[2] Takagi, H., Queueing Analysis: A Foundation of Performance Evaluation, vol. 1, (1991), North-Holland Amsterdam
[3] Tian, N.; Zhang, Z., Vacation Queueing Models—Theory and Applications, (2006), Springer-Verlag New York
[4] Al-Seedy, R. O.; El-Sherbiny, A. A.; El-Shehawy, S. A.; Ammar, S. I., Transient solution of the M/M/c queue with balking and reneging, Comput. Math. Appl., 57, 1280-1285, (2009) · Zbl 1186.90033
[5] Altman, E.; Yechiali, U., Analysis of customers' impatience in queues with server vacation, Queue. Syst., 52, 261-279, (2006) · Zbl 1114.90015
[6] Altman, E.; Yechiali, U., Infinite server queues with systems' additional task and impatient customers, Probab. Eng. Inform. Sci., 22, 477-493, (2008) · Zbl 1228.60096
[7] Kalidass, K.; Gnanaraj, J.; Gopinath, S.; Ramanath, K., Transient analysis of an M/M/1 queue with a repairable server and multiple vacations, Int. J. Math. Oper. Res., 6, 2, 193-216, (2014) · Zbl 1390.90225
[8] Kalidass, K.; Ramanath, K., Time dependent analysis of M/M/1 queue with server vacations and a waiting server, The 6th International Conference on Queueing Theory and Network Applications (QTNA'11), (2011), Seoul, Korea
[9] Indra; Renu, Transient analysis of Markovian queueing model with Bernoulli schedule and multiple working vacations, Int. J. Comput. Appl., 20, 43-48, (2011)
[10] Sudhesh, R.; Raj, L. F., Computational analysis of stationary and transient distribution of single server queue with working vacation, Global Trend. Comput. Commun. Syst. Commun. Comput. Inform. Sci., 269, 480-489, (2012)
[11] Yang, D. Y.; Wu, Y. Y., Transient behavior analysis of a finite capacity queue with working breakdowns and server vacations, Proceedings of The International MultiConference of Engineers and Computer Scientists, 1151-1156, (2014)
[12] Kalidass, K.; Ramanath, K., Transient analysis of an M/M/1 queue with multiple vacations, Pakistan J. Stat. Oper. Res., 10, 121-130, (2014) · Zbl 1390.90225
[13] Abramowitz, M.; Stegun, I., Handbook of Mathematical Functions, (1965), Dover New York
[14] Lorentzen, L.; Waadeland, H., Continued fractions with applications, Studies in Computational Mathematics, vol. 3, (1992), Elsevier Amsterdam · Zbl 0782.40001
[15] Gradshteyn, I.; Ryzhik, I.; Jeffrey, A.; Zwillinger, D., Table of Integrals, Series, and Products, (2007), Academic Press, Elsevier
[16] Ammar, S. I., Transient behavior of a two-processor heterogeneous system with catastrophes, server failures and repairs, Appl. Math. Model., 38, 2224-2234, (2014) · Zbl 1427.60185
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-08-02 00:32:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6998452544212341, "perplexity": 6564.7048711567995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154277.15/warc/CC-MAIN-20210801221329-20210802011329-00395.warc.gz"} |
https://www.physicsforums.com/threads/two-port-network-z-parameters.797034/ | # Two-port network z-parameters
As above.
## The Attempt at a Solution
(z11) With the output open circuit (and therefore I2=0) z11 = V1 / I1 = (R8 + R6)I1 / I1 = 14ohms;
(z12) With the input open circuit (and therefore I1=0) z12 = V1 / I2 = R6 = 6ohms;
(z21) With the output open circuit (and therefore I2=0) z21 = V2 / I1 = V1 x (R6 / (R8+R6) ) x ( (R8+R6) / V1) = R6 = 6ohms;
(z22) With the input open circuit (and therefore I1=0) z22 = V2 / I2 = R6xI2 / I2 = R6 = 6ohms
Is the above correct? It seems so easy that I'm worried I've overlooked something and made a really basic mistake!
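For anyone who wants a quick numerical cross-check of the working above, here is a small sketch (not part of the original post; the helper function and its name are invented) that reproduces the open-circuit measurements for this series-R8 / shunt-R6 network:

```python
def z_params(R8, R6, i_test=1.0):
    """z-parameters of the L-network (R8 in series at port 1, R6 in shunt),
    found by injecting a test current at one port with the other port open."""
    # Port 2 open (I2 = 0): I1 flows through R8 and then R6 to the bottom rail.
    v1 = i_test * (R8 + R6)     # total drop seen at port 1
    v2 = i_test * R6            # port 2 picks off the voltage across R6
    z11, z21 = v1 / i_test, v2 / i_test
    # Port 1 open (I1 = 0): I2 flows through R6 only; R8 carries no current.
    v2 = i_test * R6
    v1 = v2                     # no drop across R8, so V1 equals the node voltage
    z22, z12 = v2 / i_test, v1 / i_test
    return z11, z12, z21, z22

z11, z12, z21, z22 = z_params(8.0, 6.0)
print(z11, z12, z21, z22)   # 14.0 6.0 6.0 6.0
```

Note that the test current cancels out of every ratio, matching the point made later in the thread that only its being nonzero matters.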
BvU
Hello acx, welcome to PF :)
Some things are easy :) enjoy while it lasts...
Thanks BvU, I shall do just that!
David J
Where does the value for $I1$ come from? In the first equation for $z11$ where $I2=0$ there appears to be a value of 1 for $I1$. Then in the next equation for $z12$ and $I1 =0$ there appears to be a value of 1 for $I2$. How are these values derived, please, as I can't seem to work it out and my notes tell me virtually nothing. Appreciated, thanks
NascentOxygen
Can you enlarge your screen or change the default font to see print more clearly?
z11 = V1 / I1 = (R8 + R6) I1 / I1 = 14ohms
The formulae come from applying the z-parameter equations characterising a 2 port network.
For z11 assume that the input is not open circuit. Therefore I1 is not equal to zero. I1 then cancels out from the equation leaving z11 = R8+R6. You don't need a value for I1, just that it doesn't equal zero.
For z12 again it doesn't matter what value you have for I2. You show that z12 = R6 = 6ohms. The value of I2 becomes irrelevant.
David J
OK thanks for the info.
So as I understand this, $Z11$ is the impedance seen looking into port 1 when port 2 (output) is open.
Output is open circuit so $I2=0$
$Z11=\frac{V1}{I1}=\frac{(R8+R6)(I1)}{I1}$ The $I1$ cancels leaving $R8+R6=14\Omega$
So for $Z22$ is the impedance looking into port 2 when port 1 (input) is open
Input is open circuit so $I1=0$
$Z22=\frac{V2}{I2}=\frac{(R6)(I2)}{I2}$ The $I2$ cancels leaving $R6=6\Omega$
The reason $Z22$ only has $R6$ to deal with is because looking into the network from port 2 we can only see the $6\Omega$ resistance
So for $Z12$ i understand this is the ratio of the voltage at port 1 to the current at port 2 when port 1 is open. If port 1 is open the $I1=0$
You have used the equation $Z12 =\frac{V1}{I2} = R6 = 6\Omega$
My question is why does $V1=6\Omega$ this time whereas when for $Z11,V1=R8 + R6$
Thanks
NascentOxygen
Inject current at top terminal 2 and it exits at lower terminal 2. With port 1 open, determine V1.
David J
So current will flow through $R6$ and back out via the lower terminal 2 because port 1 is open. So R8 does not come into the equation so to speak??
I have a better understanding of this now, yes. Thanks for the help
rude man
Sneak extra info: for a passive network, x12 = x21. x can be either z or y parameters. (The rules for h, a, s parameters differ slightly).
Hi, sorry to be a pain as I know this thread is a couple of years old, but I'm now working through the same question and was wondering if anyone could explain why, for z21, V2 was changed to V1 x (R6 / (R8+R6) ) x ( (R8+R6) / V1), and how that equals 6 ohms? I'm looking to understand this as I don't want to 'just know the answer'. Is this because R8 & R6 are effectively in parallel, like the attached drawing?
Also, is this approach acceptable?
Using the formulas V1 = z11*I1 + z12*I2 and V2 = z21*I1 + z22*I2
For inpute open:
Z22 = V2 / I2 = (Z*I2) / I2 = z22 = 6 ohms
For output open:
z11= V1 / I1 = (Z*I1) / I1 = 8+6 = Z11 = 14 ohms
I am happy with the above as the formula rearranges nicely and easily; however, with the following two I end up with the 'wrong V over the wrong I'. Unsure of how to work around this without values for V or I
Z12 = V1 / I2 =
Z21= V2 / I1 =
Appreciate any help!!
rude man
z12 = v1/i2 |i1=0
z21 = v2/i1 |i2=0
what do you compute?
(Fact: in any passive 2-port network, z12 = z21. If you have any active element in your network, like a transistor, this does not necessarily hold.)
Okay so if z12 = z21
V1 / I2 = V2 / I1
which I could rearrange to
V1 * I1 = V2 * I2
V1 = (V2*i2) / I1 & V2 = (V1*I1) / I2
and
I1 = (V2*I2) / V1 & I2 = (V1*I1) / V2
so
I1 = (V2*I2) / ((V2*I2) / I1)
I1*I1 = (V2*I2) / ((V2*I2)
I1*I1 = (Z*I2*I2) / ((Z*I2*I2)
I1*I1 = 0 ???
V1 = (V2*i2) / I1 & V2 = (V1*I1) / I2
V1 = (((V1*I1) / I2)*i2) / I1 = (V1*I1) / I1 = V1 ???
z12 = ((V2*i2) / I1) / ((V1*I1) / V2)
z12 = (V2*i2) / I1) * (V2 / (V1*I1))
z12 = (I2*V2^2) / (V1*I1^2)
Safe to say I'm confused haha
To try and understand the current flow path, I think I understand why Z12 = 6ohms as it passes only through the 6ohm resistor ( first crude drawing )
I would've though that Z21 = 14ohms as well as it would take the red path ( second drawing), can't quite understand why it is 6ohms unless it takes the blue path, meaning that the 8 & 6 are in parallel?
rude man
Right, the red path. Let's look at the red path.
So z21 = v2/i1 with i2=0. So I run a current i1 into the input, what's the output voltage with the output open?
Sorry, I assume you mean the red path in the second drawing? If so current would flow from top terminal 1 to bottom terminal one, so with both terminal 2s open the output voltage would equal the voltage across the 6 ohm resistor?
Wait, so would:
Z21 = V2 / I1 = (6 * I1) / I1 = 6ohms as I1 cancels out?
Then Z12 = V1/ I2 , the current would act as in the first drawing so V1 = I2 * Z
So:
Z12 = (Z*I2)/ I2 = (6 * I2) / I2 = 6 ohms
If this is the case then I now don't understand why Z11 is 14 ohms rather than 6 like Z21, unless it is because, from the perspective of Z21, the 8 ohm resistor is downstream of the 6 ohm?
rude man
Well, you got z12 and z21 right.
So, let's re-examine z11: z11 = v1/i1 with i2 zero (nothing connected to port 2). You put in v1 volts, what do you get for the current?
BTW although I told you about z12 = z21, you shouldn't use that fact until you're comfortable with solving for them with the basic equations. It takes a while and you'd benefit from the double-check. And also BTW you may know there are several other two-port parameters, like yij, hij, aij. You should be able to find those from the respective equations also.
Okay so Z11 = V1 /I1 with V1 volts would mean that current = I1. I think I maybe understand now.
So Z11 = V1 / I1; the voltage is applied across top terminal 1 and bottom terminal 1, so Z11 = 14 ohms
However for Z21 = V2 / I1, V2 contacts will be in parallel with the 6 ohm resistor, so Z21 = (Z6+I1)/ I1 = 6 ohms
Okay noted I'll try learn to use the basic ones first.
rude man
Okay so Z11 = V1 /I1 with V1 volts would mean that current = I1. I think I maybe understand now.
Z21 = (Z6+I1)/ I1 = 6 ohms
What kind of equation is this? But your answer is right. So are z12 and z21.
Sorry I'm not sure what you mean by type of equation? Basic ohms law or?
rude man
Sorry I'm not sure what you mean by type of equation? Basic ohms law or?
Different two-port parameters have different equations.
For example, the y parameters are defined by the equations
i1 = y11 v1 + y12 v2
i2 = y21 v1 + y22 v2
etc. You can look up the ones for the h or a or other parameters. There is even a set (the "s" parameters) based on incident, transmitted and reflected waves. They're particularly appropriate for microwave and other high-frequency circuits. You won't run into those unless you're going for an EE degree, probably.
I should also mention that all parameters can be complex transforms, i.e. have real and imaginary parts, if there are inductive and/or capacitive components in the network. For example, jωL for an inductor's impedance etc.
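Since the z and y sets describe the same network, the y matrix is simply the matrix inverse of the z matrix. As an illustrative sketch (an addition, using the z values found earlier in this thread), the 2x2 inversion looks like:

```python
# [v] = Z [i]  <=>  [i] = Y [v],  so Y = Z^(-1) for the same two-port.
Z = [[14.0, 6.0],
     [6.0, 6.0]]

det = Z[0][0] * Z[1][1] - Z[0][1] * Z[1][0]   # 14*6 - 6*6 = 48
Y = [[ Z[1][1] / det, -Z[0][1] / det],
     [-Z[1][0] / det,  Z[0][0] / det]]

# Reciprocity carries over: z12 = z21 implies y12 = y21 for a passive network.
print(Y[0][0], Y[0][1], Y[1][1])   # 0.125 -0.125 0.2916...
```

The off-diagonal y terms come out negative, which is expected from the sign convention in the y-parameter equations above.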
What kind of equation is this? But your answer is right. So are z12 and z21.
Yeah I was using V1 = z11*I1+z12*I2 with I2=0 so that V1 = z11*I1 which I rearranged to z11 = V1/I1
Just wondering if you could explain in algebraic terms what the original post of:
z21 = V2 / I1 = V1 x (R6 / (R8+R6) ) x ( (R8+R6) / V1) = R6 = 6ohms;
is?
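For what it's worth (this note is an addition, not a reply from the thread), that chain of factors is just the potential divider combined with Ohm's law: with port 2 open, V2 = V1 x R6/(R8+R6) and I1 = V1/(R8+R6), so the V1 and (R8+R6) factors cancel in V2/I1, leaving R6. A numeric sketch with an arbitrary test voltage:

```python
R8, R6 = 8.0, 6.0
V1 = 42.0                     # any test voltage; it cancels out of the ratio

V2 = V1 * R6 / (R8 + R6)      # potential divider: voltage across R6, port 2 open
I1 = V1 / (R8 + R6)           # Ohm's law on the series path R8 + R6
z21 = V2 / I1                 # = V1 * (R6/(R8+R6)) * ((R8+R6)/V1) = R6

print(z21)   # 6.0
```

Changing V1 to any other nonzero value leaves z21 unchanged, which is exactly why the original working could cancel V1 and (R8+R6) symbolically.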
https://zs2.grajewo.pl/7kyhc/low-spin-complex-examples-db31b0 | Transition metal complexes can exist as high spin or low spin depending on the strength of the ligands. Ligands are chemical species involved in the formation of complexes with metal ions (hence they are also known as complexing agents), and they are classified as monodentate, bidentate, tridentate, etc. based on their denticity, the number of donor groups. Crystal field theory, established in 1929, treats the interaction of metal ion and ligand as a purely electrostatic phenomenon in which the ligands are considered as point charges in the vicinity of the metal.

The ligands interact with the d orbitals to different extents depending on the shape of the complex: the d orbitals that interact more with the ligands are pushed up to a higher d* energy level, while the d orbitals that interact less sit at a lower d level. For octahedral complexes, the splitting pattern is 2 orbitals at the higher d* level (eg) and 3 orbitals at the lower d level (t2g), separated by the crystal field splitting energy Δ0. For metal ions with d4 to d7 electronic configurations there are then two possibilities:
(i) If Δ0 > P (the electron pairing energy), electrons pair up in the lower t2g level before occupying the d* level, and a low spin complex is formed. Low spin complexes contain strong field ligands, which produce a large Δ0.
(ii) If Δ0 < P, electrons occupy all five orbitals singly before any pairing occurs, and a high spin complex is formed. In high spin octahedral complexes Δ0 is less than the electron pairing energy, which is the case with weak field ligands.

The spectrochemical series is a list of ligands arranged in order of their field strength. It is not possible to derive the entire series by studying complexes of a single metal ion; it has been developed by overlapping different sequences obtained from spectroscopic studies, and it is used qualitatively. For example, NO2- is a strong-field ligand and produces a large Δ, while weak field ligands such as the halides tend to favour high spin complexes.

Worked example: iron(III). The electronic configuration of Fe3+ is 1s2 2s2 2p6 3s2 3p6 3d5. Notice there are 5 unpaired electrons in the 3d subshell for Fe3+.
- Since the oxidation state of iron is still +3, there are still 5 electrons in the 3d subshell in the [Fe(H2O)6]3+ complex. Water is a weak ligand and the energy gap between the d and d* levels is small, so the d electrons ignore the small energy difference and are filled in the same way as in the gaseous Fe3+ cation: singly and with parallel spins. Drawing the electron-in-box diagram for the 3d subshell shows 5 unpaired electrons, so hexaaquairon(III) is a high spin complex. Since they contain unpaired electrons, high spin complexes are paramagnetic, meaning they can be attracted to an external magnetic field.
- CN- is a strong ligand and makes the energy gap between the d and d* levels larger. It requires too much energy to put d electrons into the higher d* level, so the electrons pair up at the lower d level first. There is now only 1 unpaired electron, so hexacyanoferrate(III) is a low spin complex.

Comparing both high spin and low spin complexes:
- a weak ligand such as H2O causes a smaller d-d* energy gap and tends to form high spin complexes;
- a strong ligand such as CN- causes a larger d-d* energy gap and tends to form low spin complexes.

Some further consequences of spin state:
- If there are 1-3 or 8-9 d electrons in an octahedral complex, the spin-only magnetic moment has the same value irrespective of whether the ligands are weak field or strong field; only d4 to d7 ions can be either high or low spin.
- Crystal field splitting is larger for complexes of the heavier transition metals than for the 3d metals, so 4d and 5d metal complexes are always low spin, while 3d metal complexes can be low spin or high spin. The only common high-spin cobalt(III) complex is [CoF6]3-.
- A low-spin d8 transition metal complex is usually square planar (the geometry where the molecule looks like a square plane), substitutionally inert, with no unpaired electrons; examples include [Ni(CN)4]2-, [PtCl4]2-, [AuCl4]- and the anticancer drug cisplatin, PtCl2(NH3)2. In contrast, a high-spin d8 complex is usually octahedral and substitutionally labile, with two unpaired electrons. The lability of a metal complex thus depends on the high-spin vs. low-spin configuration when both are possible.
- In a tetrahedral complex the ligands do not point directly towards the d orbitals, so the splitting is smaller; for the same reason, tetrahedral complexes do not exhibit Jahn-Teller distortion.
- The spin state also affects the ionic radius. For example, octahedral high spin Cr2+ (d4) is 64.5 pm and high spin Fe3+ (d5) is 64.5 pm, while low spin Mn3+ (d4) is only 58 pm.
- Ligand field stabilisation energies can be compared for the two arrangements. For a low spin d4 ion, LFSE = [(0.6 x 0) - (0.4 x 4)]Δ0 = -1.6Δ0; with Δ0 = 16000 cm-1 this gives -25600 cm-1, compared with -0.6Δ0 for the high spin t2g3eg1 arrangement. Whether the low spin arrangement is actually adopted depends on whether this extra stabilisation outweighs the pairing energy.
- The 18 electron rule is a loose formalism for describing stable electron configurations for some transition metal coordination complexes.

Topic: Transition Elements, Inorganic Chemistry, A Level Chemistry, Singapore
Chemistry Guru | Making Chemistry Simpler Since 2010
) 4 ] 2− my 2000+ subscribers on my YouTube Channel for new a H2. Of complexes with metal low spin complex examples th… for example, no 2− is strong-field... Less than the electron configuration is t2g3eg2with LFSE = 0 question 40 (. D ) in high spin and low spin d 4 & d 7 d., 64.5 pm the order of common ligands according to their increasing ligand field strength on. Could be high-spin or low-2 u.e orbitals at lower d level electronic for! Containing unpaired electrons, hence hexacyanoferrate ( III ) complex is usually octahedral substitutionally. Principle apply a high-spin d 8 transition metal complex is [ CoF 6 ] 3 not... Name of the ligands... Tetrahedral geometry correspond to a metal ion arranged. The magnitude of # Delta_o #, there are two cases of 4 classes at Bishan or online classes! Question 40: ( a ) Write the IUPAC name of the ligands... Tetrahedral.! Hexacyanoferrate ( III ) complex is considered a high spin and low spin d 7 configurations field splitting is for!, depending on the high-spin vs. low-spin configurations when such is possible 2 ) 3d! High-Spin or low-2 u.e, substitutionally labile, with two unpaired electrons, these high spin octahedral complexes, Tetrahedral. Considered as consisting of a central metal atom or ion surrounded by a number of 4 inert with no electrons... We can also determine the electron configuration is t2g3eg2with LFSE = 0 the order of their field strength is this! If Δ0 > P, the Tetrahedral complexes also do not exhibit Jahn-Teller distortion additionally, the configuration be... Transition metal complex also affects an atom 's ionic radius and Aufbau Principle apply common include. Strong-Field ligand and produces a large Δ low-2 u.e Stabilization energy for a low spin complex low spin complex examples. Less than the electron configuration is t2g3eg2with LFSE = 0 digressing here, but the same metal the... 2− is a strong ligand and will cause the octahedral complexes, oct less. 
Electron in box diagram for 3d subshell calculate the bond order of their field strength both isotropic octahedral. Guru | Making Chemistry simpler since 2010 | series is used qualitatively d... Be Monodentate, bidentate, tridentate, etc order of benzene discussed.. To their increasing ligand field strength please like this video and SHARE it with your friends nature! Of ligands is discussed under coordination Chemistry 5 ; high spin and spin. Cr 3 +, Co 3 +, Co 3 +, Co 3 + and...
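The high-spin/low-spin filling rules above reduce to a short algorithm: a weak-field (high-spin) octahedral complex places one electron in each of the five d orbitals before pairing, while a strong-field (low-spin) complex fills the t2g set completely before eg. A minimal Python sketch of that rule (the function name and structure are illustrative; deciding which case actually applies still requires comparing Δo with P):

```python
def d_orbital_filling(n_electrons, low_spin):
    """Distribute n d-electrons over t2g (3 orbitals) and eg (2 orbitals)
    for an octahedral complex. Returns electrons per orbital as a list
    [t2g, t2g, t2g, eg, eg] and the number of unpaired electrons."""
    orbitals = [0] * 5  # indices 0-2: t2g, 3-4: eg
    if low_spin:
        # strong field: fill t2g completely (pairing) before touching eg
        order = [0, 1, 2, 0, 1, 2, 3, 4, 3, 4]
    else:
        # weak field: one electron in each of the five orbitals first
        # (Hund's rule), then pair in the same order
        order = [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]
    for i in range(n_electrons):
        orbitals[order[i]] += 1
    unpaired = sum(1 for e in orbitals if e == 1)
    return orbitals, unpaired

# d5 (e.g. Fe3+): high spin has 5 unpaired electrons, low spin only 1
print(d_orbital_filling(5, low_spin=False))  # ([1, 1, 1, 1, 1], 5)
print(d_orbital_filling(5, low_spin=True))   # ([2, 2, 1, 0, 0], 1)
```

The same function reproduces the d6 example: low spin gives t2g6 eg0 with no unpaired electrons, high spin gives four.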
https://www.wind-energ-sci.net/3/919/2018/
Wind Energy Science The interactive open-access journal of the European Academy of Wind Energy
Wind Energ. Sci., 3, 919-928, 2018
https://doi.org/10.5194/wes-3-919-2018
Special issue: Wind Energy Science Conference 2017
Research articles 18 Dec 2018
# Experimental validation of a ducted wind turbine design strategy
Benjamin Kanya and Kenneth D. Visser
• Department of Mechanical and Aeronautical Engineering, Clarkson University, Potsdam, NY 13699, USA
Abstract
A synergistic design strategy for ducted horizontal axis wind turbines (DWTs), utilizing the numerical solution of a ducted actuator disk system as the input condition for a modified blade element momentum method, is presented. Computational results of the ducted disk have shown that the incoming flow field for a DWT differs substantially from that of a conventional open rotor. The rotor plane velocity is increased in the ducted flow field, and, more importantly, the axial velocity component varies radially. An experimental full-scale 2.5 m rotor and duct were designed, using this numerical strategy, and tested at the University of Waterloo's wind turbine test facility. Experimental results indicated a very good correlation of the data with the numerical predictions, namely a doubling of the power output at a given velocity, suggesting that the numerical strategy can provide a means for a scalable design methodology.
1 Introduction
Wind energy has long been acknowledged as having the potential to supplement and even displace the carbon-based fuel needs of our society. The wide adoption of small wind energy, namely that with a swept rotor area of less than 200 m2, has been hampered, however, by higher unit costs and lower efficiency than those of its large-scale counterparts. Studies on small turbines at Clarkson University have focused on improving their efficiency, particularly at lower wind speeds, with emphasis on the key metric of cost per unit energy produced, namely USD kWh−1. Increasing the energy extraction for a given turbine size or reducing the manufacturing and operating costs are both options that increase the adoption of small wind by consumers. Other important factors that must also be considered include the noise signature, sensitivity to wind directional changes, and issues of visibility and community acceptance.
The ducted wind turbine (DWT) concept has been fraught with controversy over the years yet still shows promise in improving the USD kWh−1 issue. DWTs are created by enclosing a conventional horizontal axis wind turbine with a lifting surface geometry revolved around the rotor axis. The duct captures a larger stream tube than an open rotor, as illustrated in Fig. 1. A substantial increase in velocity, exceeding even the free stream, is observed at the rotor face and the associated increase in mass flow rate increases the power output of the turbine. A properly designed DWT can improve the key areas mentioned above, leading to a much more effective small turbine design. There are, however, issues with DWTs that need to be addressed before their full potential can be realized, the foremost being the tradeoff of increased energy production against the increased use of materials, which usually results in a higher unit cost.
This paper reports on recent experimental results that validate a synergistic design strategy of the duct and the rotor. The numerical flow field of the optimized ducted actuator disk geometry is used as the input to the blade element momentum rotor design code. In this way, the influence of the duct on the flow field of the rotor is accounted for and the rotor geometry modified appropriately.
Figure 1. Stream tube capture regions for an open-rotor turbine and a ducted rotor turbine.
2 Background
Although studies on the potential performance gains of ducted turbines can be traced back to the 1920s, the extensive testing by Foreman and Gilbert (1979), Foreman et al. (1978) and Gilbert et al. (1978) in the 1970s proposed that the gain occurs because the duct reduces the pressure behind the turbine, relative to that behind a conventional wind turbine, causing more air to be drawn through. They suggested that a performance efficiency of Cp=1.57 could be achieved, defined as
$C_{\mathrm{p}}=\frac{P}{\frac{1}{2}\rho V_{\mathrm{\infty}}^{3}A},$
where A is the rotor area. The maximum Cp for an un-ducted open rotor is 0.593, commonly known as the Betz limit. This leads to the definition of an "augmentation ratio" of r=2.65, where
$r=\frac{C_{\mathrm{p,\,Ducted\ Turbine}}}{C_{\mathrm{p,\,Betz}}}.$
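Plugging numbers into these definitions is straightforward; the sketch below (the air density value and function names are assumptions for illustration) reproduces the quoted augmentation ratio from the standard rotor-area Cp definition:

```python
import math

RHO = 1.225  # sea-level air density, kg/m^3 (assumed)

def power_coefficient(power_w, v_wind, rotor_diameter):
    """Standard power coefficient: Cp = P / (0.5 * rho * V^3 * A)."""
    area = math.pi * (rotor_diameter / 2) ** 2
    return power_w / (0.5 * RHO * v_wind ** 3 * area)

def augmentation_ratio(cp_ducted, cp_betz=0.593):
    """r = Cp,ducted / Cp,Betz."""
    return cp_ducted / cp_betz

# Foreman and Gilbert's proposed ducted efficiency of Cp = 1.57 gives the
# quoted augmentation ratio over the Betz limit
print(round(augmentation_ratio(1.57), 2))  # 2.65
```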
Hanson (2008) suggested that it is the lift generated by the shroud, as shown by de Vries (1979), that induces an increased mass flow through the rotor, resulting in an increase in the power coefficient proportional to the mass flow. Although one might surmise that, via this increased mass flow rate and velocity, a DWT can then exceed the Betz limit, this is incorrect because a much larger stream tube has been captured, and the assumptions applied to the open-rotor case do not apply per se. Unfortunately, such claims by inventors that they have “beaten Betz” have only served to give DWTs a bad reputation.
Many studies have investigated the feasibility and associated augmentation factors seen in DWTs in an effort to further their development (Hu and Cheng, 2008; Igra, 1976, 1984; Hansen et al., 2000; Werle and Presz Jr., 2008; van Bussel, 2007; Oman et al., 1977; Leoffler Jr. and Vanderbilt, 1978; Ohya et al., 2002, 2008; Politis and Koras, 1995; and Jamieson, 2009), with the largest prediction, of 7, coming from Badawy and Aly (2000); however, conclusions have been quite varied. Werle and Presz Jr. (2008) used fundamental momentum principles and concluded that the possible augmentation factor could only approach 2 and that earlier studies had incorrect assumptions, leading to overly optimistic predictions. Hansen et al.'s (2000) viscous numerical results predicted ideal Cp values approaching 0.94 and an augmentation factor of 1.6. They also indicated that if the duct geometry could be made to keep the flow attached, the augmentation factor could be improved further.
Figure 2. Commercial attempts at large ducted turbines: (a) Vortec 7; (b) Ogin (FloDesign).
A review article by van Bussel (2007) substantiates the above arguments regarding mass flow and indicates that the increase in the mass flow, and thus the augmentation ratio, is proportional to the ratio of the diffuser area to the rotor area. He concludes that the amount of energy extracted per unit volume of air with a DWT remains the same as for a bare rotor, but since the volume of air has increased, so has the total energy extracted. He also noted that Cp values above 1, corresponding to augmentation ratios on the order of 2, are achievable with diffuser-to-inlet area ratios on the order of 2.5. In addition to the experimental data, van Bussel reported on the effect of reducing the back pressure, which can also have a profitable effect on the performance.
This potential increase in power generation has continued to drive DWT research; however, no commercial design has been able to realize these augmentation factors and no commercially viable DWT has been successful. A good example of this type of failure is seen in the Vortec 7 from New Zealand in Fig. 2a (Phillips, 2003; Windpower Monthly, 2018). A more recent example is that of the demise of Ogin (Boston Globe, 2018) in Fig. 2b. Perhaps the most promising experimental field results have been that of Ohya at Kyushu University in Japan on ducted turbines with a brim at the trailing edge (Ohya, 2014). Experimental data have been obtained on several units, including 500 W, 3 kW, 5 kW and 100 kW units, with measured power coefficients approaching a Cp=1.0. Ohya (2014) also reported no appreciable increase in the noise levels generated by the turbine while running.
Recent results of a synergistic design strategy coupling the duct flow field to the rotor design at Clarkson University have indicated two key design aspects. First, the presence of the duct modifies the axial velocity at the rotor, as shown in Fig. 3a (Jedamski and Visser, 2013) from a nominally uniform distribution to one with a radial variation. Second, moving the rotor to a location aft of the throat (Fig. 3b) provides an increased power output for a given duct geometry (Visser, 2016). Most rotor designs seek to exploit the high velocity at the throat of the duct; however, the presence of the rotor modifies the velocity where it is stationed, and more power can be extracted from the design, for a given duct, by moving the rotor aft. The optimum blade design for the rotor is not that which would be required of an open rotor but is different in planform shape and twist, due to the presence of the flow field generated by the duct. Venters et al. (2017) have also indicated Cp values, based on the duct exit area, of greater than 0.593, possibly pointing the way for a wind energy extraction device that is more efficient than a turbine of equal diameter.
Figure 3. Key design aspects for the Clarkson ducted turbine. (a) Non-uniform velocity distribution. (b) Aft rotor location.
Perhaps the most enticing aspect of the DWT concept is the potential for increased energy production in lower speed wind regimes, opening up many more areas to a viable distributed wind energy solution. Based on the above promising results, this investigation was undertaken to experimentally validate the synergistic design strategy.
3 Investigative methods
In order to compare the experimental data to existing turbines, a rotor diameter of 2.5 m was selected for the design, matching a commercially available turbine, the Excel 1 by Bergey Windpower (2017). The Excel 1, illustrated in Fig. 4, is a 2.5 m diameter open rotor with a maximum output of 1.2 kW. The blades are constant chord and untwisted. The 2.5 m ducted prototype rotor was designed specifically for the ducted turbine environment. The test plan focused first on evaluating the open-rotor design against the Bergey open rotor and then on examining the effect of the duct. Details of the numerical design are presented below, followed by an overview of the experimental methods.
Figure 4. Bergey Excel 1.
Figure 5. Numerical duct results. (a) Flow field solution. (b) Extracted velocity profile.
## 3.1 Numerical approach
The numerical design strategy used a two-part scheme. First, the flow field of the duct, with an actuator disk, was determined using the Navier–Stokes solver FLUENT. The grid had a boundary layer mesh with a growth rate of 1.1, and the first mesh point was set at $y^{+}\approx 1$. The boundary layer thickness was calculated as a function of Rec, based on the duct chord, for each case, and enough inflation layers were used to span the entire boundary layer, to obtain the most accurate results available from a Reynolds-averaged Navier–Stokes (RANS) solution. The actuator disk was covered with 200 quadrilateral elements (i.e., for the 2.5 m rotor, each element in the 2-D axisymmetric model covered 6.25 mm) to model the turbine. A refined unstructured triangular grid surrounded the duct, enclosed by a large structured quadrilateral grid extending farther upstream and downstream of the actuator disk; the rest of the domain was meshed with unstructured quadrilateral elements. The k–ω shear stress transport (SST) turbulence model was utilized, which, among the two-equation turbulence models, gives the better prediction of flow separation. Further details of the methods employed can be found in Bagheri-Sadeghi et al. (2018), and an example of the solution is shown in Fig. 5a.
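Hitting a target y+ with the first mesh point requires estimating the wall shear stress in advance. A common back-of-the-envelope sketch uses a flat-plate skin friction correlation as a stand-in for the duct boundary layer (the chord value, correlation and constants here are illustrative assumptions, not taken from the paper):

```python
import math

def first_cell_height(u_inf, chord, y_plus=1.0, nu=1.5e-5, rho=1.225):
    """Estimate the wall-normal height of the first mesh cell needed to reach
    a target y+, using the flat-plate correlation Cf = 0.026 * Re^(-1/7)
    as a stand-in for the actual duct boundary layer."""
    re_c = u_inf * chord / nu               # chord Reynolds number
    cf = 0.026 * re_c ** (-1.0 / 7.0)       # skin friction coefficient
    tau_w = 0.5 * cf * rho * u_inf ** 2     # wall shear stress
    u_tau = math.sqrt(tau_w / rho)          # friction velocity
    return y_plus * nu / u_tau              # y = y+ * nu / u_tau

# illustrative: 9 m/s over an assumed 0.625 m duct chord
print("{:.1e} m".format(first_cell_height(9.0, 0.625)))  # ~3.7e-05 m
```

Cell heights on the order of tens of micrometres like this are why an inflation-layer mesh with a modest growth rate is needed near the duct surface.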
From the field solution, the axial velocity field was then extracted (Fig. 5b) and used as an input for Clarkson's in-house blade element momentum (BEM) code, mRotor. The rotor design in mRotor uses a fairly standard BEM strategy by Glauert to determine the optimum rotor shape (Kanya and Visser, 2010). Figure 6 illustrates the typical forces and velocities, including the axial induction factor a, the factor by which the upstream flow velocity is slowed by the time it reaches the rotor plane. For an ideal open rotor, $a=1/3$ to maximize the power extracted. Other local variables at radius r to be noted include the following: θ, the blade pitch angle; α, the angle of attack; ϕ, the angle which the relative velocity vector, W, makes with the rotor plane; ω, the angular velocity; dQ, the elemental torque; dT, the elemental thrust; Vo, the upstream velocity; L, lift; D, drag; and a′, the angular velocity induction factor.
Figure 6. Blade element momentum forces and velocities.
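The velocity triangle of Fig. 6 can be written out directly. The sketch below evaluates the flow angle and relative velocity at one station; the station values and pitch angle are illustrative, and a uniform inflow is assumed, whereas mRotor actually takes the duct-modified radial velocity profile as input:

```python
import math

def blade_element_angles(v0, omega, r, a, a_prime, theta):
    """Flow angle phi, angle of attack alpha and relative speed W at radius r,
    from the standard BEM velocity triangle:
        tan(phi) = V0*(1 - a) / (omega*r*(1 + a'))
        alpha    = phi - theta
    """
    axial = v0 * (1 - a)                    # slowed axial velocity at the rotor
    tangential = omega * r * (1 + a_prime)  # in-plane velocity seen by the blade
    phi = math.atan2(axial, tangential)
    alpha = phi - theta
    w = math.hypot(axial, tangential)
    return phi, alpha, w

# illustrative station: 9 m/s wind, 480 rpm, r = 1.0 m, a = 1/3, a' = 0,
# 5 deg pitch -- numbers chosen for the example, not taken from the paper
phi, alpha, w = blade_element_angles(9.0, 480 * 2 * math.pi / 60, 1.0,
                                     1 / 3, 0.0, math.radians(5))
print(round(math.degrees(phi), 1))  # 6.8
```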
In addition to obtaining the rotor plane velocity profile from the numerical solution, a second piece of information, the thrust coefficient at the rotor, CT, rotor, was also extracted and the axial interference factor, a, was determined for input to mRotor from the following relation:
$a=\frac{C_{T,\,\mathrm{rotor}}}{4+C_{T,\,\mathrm{rotor}}},$
where
$C_{T,\,\mathrm{rotor}}=\frac{2\mathrm{\Delta}P}{\rho V_{\mathrm{rotor}}^{2}}.$
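These two relations chain together: the disk loading ΔP from the FLUENT solution gives CT,rotor, which in turn gives the axial interference factor passed to mRotor. A minimal sketch (the pressure drop and rotor-plane velocity here are illustrative values, not taken from the paper):

```python
def thrust_coefficient(delta_p, v_rotor, rho=1.225):
    """C_T,rotor = 2*dP / (rho * V_rotor^2), referenced to the rotor-plane
    velocity rather than the free stream."""
    return 2 * delta_p / (rho * v_rotor ** 2)

def axial_interference(ct_rotor):
    """a = C_T,rotor / (4 + C_T,rotor)."""
    return ct_rotor / (4 + ct_rotor)

# illustrative: a 60 Pa disk loading with a 10 m/s rotor-plane velocity
ct = thrust_coefficient(60.0, 10.0)
print(round(ct, 3), round(axial_interference(ct), 3))  # 0.98 0.197
```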
It is important to note that when designing a rotor, one is often, as is the case here, designing for a given generator. In this case, as the generator was a 1.8 kW unit, requiring about 45 N m at the 480 rpm rated speed, it was critical that the rotor could deliver this torque at that RPM. This usually means a compromise with the ideal aerodynamic tip speed ratio, TSR, the ratio of the blade tip speed to the oncoming wind speed. In addition, it was desired to use a simple airfoil, namely a curved flat plate like the GOE417a. Airfoils such as these work well in the low Reynolds number environment that results from the lower RPM, and an additional goal for the turbine was a cheaper manufacturing strategy. Since the optimum TSR is a function of the airfoil performance (Kanya and Visser, 2010), the selection of a less than optimal airfoil can be a better choice at a lower TSR rather than having a better-performing airfoil operating in an off-design condition.
The design TSR was set to 4, and the blade number selected was 3, despite the slightly higher aerodynamic gain potential with added blades. The presence of the duct mitigates the tip losses to some extent, and this, coupled with a constraint on the budget, pushed the design to a three-bladed configuration. Figure 7 illustrates the final design of the rotor, with a solidity of 9.8 %.
Figure 7. Rotor blade (2.5 m) designed using Clarkson University mRotor software.
## 3.2 Experimental setup
The Clarkson DWT was designed as a prototype for the NEXUS-NY competition funded by the New York State Energy and Research Development Authority. Since the goal was to undergo wind tunnel testing, many of the requirements of a commercially viable turbine, such as a yaw bearing and weatherproofed materials, were not required and were not included in this prototype build.
The duct, illustrated by the geometry of Fig. 8, was constructed out of expanded polystyrene (EPS) foam covered in a StyroSpray polymer, as shown in Fig. 9. The exit diameter of the duct was 3.3 m, giving a ratio of the exit area to the rotor area of 1.74. The ratios of the duct length to the rotor diameter and to the exit diameter were 0.25 and 0.19, respectively. The rotor blades, one of which is shown in Fig. 10, were numerically milled from aluminium to match the BEM design, with a mounting boss that enabled them to be bolted to the hub. The rotor was attached to a 1.8 kW radial flux permanent magnet generator from Ginlong, model GL-PMG-1800. The three-phase AC generator output was rectified through a Microsemi MSD52 glass passivated three-phase rectifier bridge, and the DC output was measured with a BK Precision 8522 programmable DC electronic load.
Figure 8. Duct cross-sectional geometry.
Figure 9. Construction of duct at Clarkson University.
Wind tunnel testing was conducted in November 2016 at the University of Waterloo wind turbine test facility in Waterloo, Ontario, Canada. An exterior view of the facility highlighting the six-fan external blower array is shown in Fig. 11. The uniformity of the wind was not explicitly mapped. The flow field was generated by a bank of six 100 hp fans with independent variable speed control. Data were acquired with a single-location sonic anemometer. At a given velocity setting the flow field was sampled with a handheld anemometer and compared to the single point. The incoming flow to the turbine varied by ±1–2 %. The turbulence of the flow field was sampled over the range of incoming velocities and was found to vary over the range of 5–10 %. The blockage ratio, based on the projected frontal area of the duct and the rotor, for a stationary rotor, was 4.1 %. If the rotating rotor is considered to act as a solid blockage, the ratio would be 11.7 %. No blockage corrections were made.
The turbine was first tested in an open-rotor configuration, without a duct, and after a suitable amount of data was acquired, including repeat runs, the duct was attached to the turbine stand and the runs repeated. A one-third view of the test setup, including the upstream sonic anemometer, is shown in Fig. 12. The duct was subsequently removed and the open-rotor tests repeated. Tunnel velocity was varied in 1 m s−1 increments up to a speed that caused the generator to slightly exceed its maximum rating of 1800 W. Velocity was measured upstream of the rotor with a sonic anemometer. At each velocity test point, the resistive load on the turbine was varied until the power output, namely the product of the voltage and current, was greatest. RPM was recorded using a magnetic bicycle switch. Since there was no electronic braking circuitry or dump load, for each test point the resistive load was first increased (ohms were lowered), followed by an increase in tunnel speed, to prevent a runaway condition.
Figure 11. University of Waterloo wind turbine test facility.
Figure 12. Ducted rotor under test conditions.
4 Results
The results of the wind tunnel testing are discussed below, with a focus on the power performance. Power results are followed by additional observations on the energy production implications and the behavior of the flow field. During the test, a handheld anemometer was positioned at various locations in the flow field to gain some understanding of the velocity field near the rotor, as the duct did not have pressure taps. It was observed that the velocity at the duct inlet face was about 10–15 % above the upstream reference value across the entire speed range.
## 4.1 Power performance
Figure 13 is a plot of the wind tunnel results. The black triangles represent the published power curve of the Bergey Excel 1 turbine. The solid filled circles are the data for the Clarkson open-rotor configuration. These open-rotor tests were conducted with the hub pitch set to 45° (nominal) and then 46° as an after-check. It can be seen that the performance of the current rotor is slightly better than that of the Bergey, and this is to be expected: the Bergey uses an untwisted constant-chord blade, while the Clarkson blade has an optimum twist and planform distribution. Both sets of data sit below the upper theoretical limit for a 2.5 m open rotor, the Betz limit, as they should; this limit is denoted by the solid line.
The ducted data are marked by the open circles in Fig. 13. The power output substantially increased with the presence of the duct. At 9 m s−1, for instance, the Bergey generated about 700 W, while the Clarkson open-rotor configuration put out about 925 W. With the duct installed, the turbine output increased to about 1880 W. Thus, the duct roughly doubled the power output relative to the un-ducted configuration. Unfortunately, 9 m s−1 was the highest speed that could be tested, as the generator was producing above its rated output at that speed and could not be pushed further.
Perhaps most interesting, however, are the two light grey dotted lines bracketing the data in Fig. 13. These are the predicted output curves from mRotor, with and without the tip loss corrections included, indicating that the ability to synergistically design and predict the performance of the ducted turbine, with the numerical simulation input to the mRotor code, was validated by the wind tunnel data. The last curve on the plot, denoted by the heavy dashed line, is the numerical prediction by the actuator disk model in FLUENT, namely the solution used to generate the input velocity field profile for the mRotor design. The power predicted represents the possible upper limit to the turbine performance. For more details on the numerical input solution, see Bagheri-Sadeghi et al. (2018).
The Cp values, based on the rotor area, were also calculated for the data and are presented in Fig. 14. The Bergey reaches a peak value of about Cp=0.37 at 7 m s−1. The Clarkson open-rotor configuration remains fairly constant at about Cp=0.41 from 5–9 m s−1. The ducted configuration generated values of about Cp=0.85–0.90 over the same range. It is important to note that although these values lie above the Betz limit line, this does not in any way indicate that the theoretical Betz limit has been exceeded. That limit applies only to an open rotor. A ducted turbine captures a larger stream tube, and one would need to determine how much power was in that stream tube to get a sense of the "efficiency" of the ducted turbine.
Figure 13. Wind tunnel power curve performance.
Figure 14. Experimental Cp values (based on rotor area).
One possibility is to non-dimensionalize the data by the exit area of the duct. Scaling by the maximum projected area of the duct can be argued to be a "fairer" evaluation of the data when compared to a conventional open-rotor turbine. In the case here, the exit diameter is 3.3 m and the rotor diameter is 2.5 m. Hence the Cp values would all be scaled by the ratio 2.5²/3.3², a factor of 0.574. Cp values for the ducted configuration would then be in the range of 0.50–0.55, which is still better than an open rotor of this size.
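Both the rotor-area and exit-area referenced values can be checked from the quoted 9 m s−1 operating points; a sketch (sea-level air density assumed):

```python
import math

RHO = 1.225  # sea-level air density, kg/m^3 (assumed)

def cp(power_w, v_wind, diameter):
    """Power coefficient referenced to a disk of the given diameter."""
    area = math.pi * (diameter / 2) ** 2
    return power_w / (0.5 * RHO * v_wind ** 3 * area)

# quoted 9 m/s operating points, referenced to the 2.5 m rotor
print(round(cp(925, 9.0, 2.5), 2))   # 0.42  (open rotor)
print(round(cp(1880, 9.0, 2.5), 2))  # 0.86  (ducted)

# re-referencing the ducted value to the 3.3 m exit diameter scales it
# by 2.5^2 / 3.3^2
print(round((2.5 / 3.3) ** 2, 3))    # 0.574
print(round(cp(1880, 9.0, 3.3), 2))  # 0.49
```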
A closer look at the data revealed that the torque was consistently lower than predicted and, conversely, that the RPM values were higher, at each of the operating points. At 1800 W, for instance, there should be around 45–50 N m of torque at 350 rpm; however, 450–500 rpm was observed, implying a torque of only 35–37 N m. A check on the tip speed ratio values indicated they were also high: above 6 when they were designed to be below 5. Overall, the torque was consistently 30–40 % lower across the speed range, while the RPM values were about 35 % higher than expected.
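The quoted torque and tip speed ratio figures follow from the basic rotor kinematics; a quick check of the arithmetic:

```python
import math

def tip_speed_ratio(rpm, v_wind, rotor_diameter=2.5):
    """TSR = (blade tip speed) / (wind speed)."""
    omega = rpm * 2 * math.pi / 60  # rad/s
    return omega * (rotor_diameter / 2) / v_wind

def torque_nm(power_w, rpm):
    """Shaft torque from power and rotational speed: tau = P / omega."""
    return power_w / (rpm * 2 * math.pi / 60)

# observed 450-500 rpm at 9 m/s puts the TSR well above the design value of 4
print(round(tip_speed_ratio(450, 9.0), 1))  # 6.5
print(round(tip_speed_ratio(500, 9.0), 1))  # 7.3

# at 1800 W, 350 rpm corresponds to ~49 N m, while the observed ~480 rpm
# implies only ~36 N m, consistent with the 35-37 N m quoted above
print(round(torque_nm(1800, 350), 1))  # 49.1
print(round(torque_nm(1800, 480), 1))  # 35.8
```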
A series of additional tests were run at a wind speed of approximately 3.3 m s−1 to vary the pitch angle of the blades and observe if the power output could be improved beyond the numerically predicted geometry. Table 1 lists the geometries tested and the output.
Table 1. Impact of hub pitch angle on ducted configuration.
At each condition, the load was varied until the maximum power was obtained. It appeared that the nominal hub pitch angle of 45°, as predicted by mRotor, was close to, if not exactly, the correct setting. It also seems that a variation of a degree or two would not impact the performance significantly.
## 4.2 Energy
Although the efficiency of the power curve is an important characteristic of a turbine's performance, it is the energy generated that is the most important. Society pays for energy, and power is simply the rate at which it is used. Table 2 contains the annual energy produced (AEP) for each configuration based on the typical 5 m s−1 Rayleigh probability distribution (PDF) and the configurations' respective power curves. As can be seen, if the ducted 2.5 m rotor was limited in power output to that of the Bergey Excel 1, namely 1.2 kW, the Clarkson turbine would still produce twice as much energy. Fitting the rotor to a 3.5 kW generator would increase the energy output to approximately 2.7 times that of the Excel 1. This highlights the inadequacy of simply rating a turbine by the power it produces at a given speed, typically 11 m s−1 for small turbines. It is the AEP at a given wind PDF that should be the defining quality.
Table 2: AEP comparison of 2.5 m rotors using standard Rayleigh distributions (kWh).
The observed improvement results primarily from an increased performance output of the ducted turbine at low speeds. This can be inferred from the power plot of Fig. 13, but what is not so obvious is the role the PDF of the average wind speed plays. For the 5 m s−1 PDF, the wind is blowing below 6 or 7 m s−1 for the majority of the time. This is why the improved low-speed performance from the duct is so effective on the energy extracted. If a wind regime with the average speed of 4 m s−1 is now considered, the results are even more illustrative, as noted in Table 2. At 4 m s−1, the 1.2 kW ducted configuration now generates 2.4 times more energy than the Bergey. Note, however, that the 3.5 kW advantage is about the same, indicating perhaps that the generator needs to be sized according to the local wind regime as well.
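The AEP figures of Table 2 come from integrating a power curve against the wind-speed PDF, which can be sketched as follows (the power curve below is an illustrative cubic-to-rated shape, not the measured Clarkson or Bergey data):

```python
import math

def rayleigh_pdf(v, v_mean):
    """Rayleigh wind-speed PDF written in terms of the mean speed v_mean."""
    return (math.pi * v / (2.0 * v_mean ** 2)) * math.exp(-math.pi * v ** 2 / (4.0 * v_mean ** 2))

def aep_kwh(power_curve_w, v_mean, v_max=30.0, dv=0.05):
    """AEP = 8760 h times the integral of P(v) f(v) dv, via a simple Riemann sum."""
    mean_power_w = sum(power_curve_w(i * dv) * rayleigh_pdf(i * dv, v_mean) * dv
                       for i in range(1, int(v_max / dv)))
    return 8760.0 * mean_power_w / 1000.0

def example_power_curve(v, rated_w=1200.0, rated_v=11.0, cut_in=2.5):
    """Illustrative shape only: cubic growth up to rated power, then flat."""
    return 0.0 if v < cut_in else min(rated_w, rated_w * (v / rated_v) ** 3)

for v_mean in (4.0, 5.0):
    print(v_mean, round(aep_kwh(example_power_curve, v_mean)), "kWh")
```

Feeding in two measured power curves and dividing the two AEP values reproduces the kind of energy ratios quoted in Table 2; most of the Rayleigh probability mass at these mean speeds lies below 6–7 m s−1, so a power curve that is larger at low speeds wins disproportionately in AEP.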
## 4.3 Flow field issues
In an effort to understand the flow field a little better, a lower quadrant of the duct was tufted to examine the surface flow as shown in Fig. 15a. The tufts indicated substantial separation behind the support struts and near some regions of the trailing edge. There were also regions of fluctuating separation from the mid-chord of the duct aft to the trailing edge. Figure 15b illustrates an example of this region. The observed fluctuations in the surface flow, causing periodic separation and reattachment, may have been in part due to the upstream blade passage, but it was clear that there were also other frequencies involved. Although only subjective observations were made, a time-averaged separation of more than 50 % could be seen in some regions. In light of this, it is surprising that the overall performance was as good as it was and suggests that changes can be made to the duct geometry to improve the performance.
Although the predictions of power output from the numerical model agreed quite well with the experimental results, there was a notable difference between numerical predictions and experimental results. The numerical model showed very little flow separation on the duct, whereas flow separation was clearly observed in the experimental tests. This could be due to simplifications of the numerical model, like using a 2-D axisymmetric model where the turbine was replaced with an actuator disk or the limited accuracy of two-equation turbulence models. It could also have resulted from the differences between the manufactured model and the numerical model: from the final geometry of the duct, to the actual physical supporting structures used for the duct, and the influence of the individual rotor blades (as opposed to a uniform disk) which affect the flow field. A third factor is the proximity of the ducted wind turbine to the floor. Since proper expansion would not have been possible to the extent of a turbine mounted far away from the ground, this may have aggravated the flow separation.
Figure 15: Tufting on the ducted surface.
## 4.4 Final considerations
An argument could be made, and has been made to the authors, that rather than using a duct, one could simply increase the size of the rotor and accomplish the same increase in energy output, and this is absolutely true. In fact, the material required for the rotor would likely be less than that needed for the duct. For the case at hand, assuming the power is proportional to the area, all else being equal, one would need an increase of a factor of 2.7 times the area, from d=2.5 m (8.2 ft) to d=4.1 m (13.5 ft) or an increase in diameter of over 50 % – not an insignificant increase in the blade size, but certainly doable. Note that the duct exit diameter for the prototype was 3.3 m, 20 % smaller than the required rotor size increase would have to be.
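The equivalent-rotor arithmetic above is easy to reproduce under the stated assumption that power scales with swept area, all else being equal:

```python
import math

d_rotor = 2.5        # prototype rotor diameter, m
energy_factor = 2.7  # required increase in capture area for equal output
d_equiv = d_rotor * math.sqrt(energy_factor)   # diameter scales as sqrt(area)
print(round(d_equiv, 2))                       # about 4.11 m, >50 % larger
d_exit = 3.3                                   # prototype duct exit diameter, m
print(round(100.0 * (1.0 - d_exit / d_equiv)), "% smaller duct exit")
```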
In light of arguments such as this, it is important to highlight the additional advantages of a ducted turbine. First, for a given energy requirement, the DWT can be made smaller than an open rotor. Or stated in another way, for the same size rotor, the DWT would produce more energy. Second, one could utilize a lower tower to achieve the same AEP. Finally, and most importantly, is that a lower wind-speed regime can be utilized to provide the same energy when using a ducted turbine, and this is, arguably, the most significant factor, for it opens up many more areas to wind energy currently not even being considered.
From a design point of view, one significant cost and weight advantage is that a smaller generator can be used to achieve the same output AEP. A smaller generator also reduces the mast-head weight and eases installation, possibly alleviating the need for a crane. More subjective arguments, such as redundancy, can be debated, but if two turbines can produce the same amount of energy as one turbine and the cost is the same, the advantage is quite obvious. The current design uses a low-RPM generator, which helps alleviate noise issues, and the duct helps a bit in this area as well. Issues of ice throw from the blades can be mitigated with the duct, as well as blade throw, should a failure occur. Although a contentious topic, the impact on avian life can be argued to be lessened, as birds can always see the duct and have even been seen to perch on top of ducts in the past.
A final consideration is the key metric mentioned previously: cost per unit energy. Despite all the well-mannered intentions of those seeking a greener method of energy generation, it is this USD kWh−1, particularly over the life of the unit, also known as the levelized cost of energy (LCOE), that will be the defining factor for the ducted wind turbine. Whether or not a viable solution to this issue exists is a question that is still to be answered.
5 Conclusions
The primary conclusions of this study were as follows:
1. The ability to predict the performance of a ducted turbine using a synergistic combination of a numerical flow field model as the input to a blade element momentum model was experimentally confirmed, suggesting that the numerical strategy can provide a means for a scalable design methodology.

2. A prototype 2.5 m ducted rotor, with a ratio of the duct length to rotor diameter and to exit area of 0.25 and 0.19, respectively, was tested and exhibited the potential for generating twice as much energy as a conventional open rotor. Experimental Cp values, based on the rotor area, of 0.8 to 0.95 were measured, almost 3 times that of small commercially available turbines of the same rotor size.

3. The use of a ducted turbine configuration can reduce the size of the required generator and the weight of the entire turbine at the mast head, leading to a reduction in the cost per kWh of the turbine.
Data availability
The data plotted in Fig. 13 are the data available from this experiment. These data are available on request from visser@clarkson.edu.
Author contributions
KV designed the experimental apparatus and carried out the experiments. BK developed the model code and performed the simulations. KV prepared the paper.
Competing interests
The authors declare that they have no conflict of interest.
Special issue statement
This article is part of the special issue “Wind Energy Science Conference 2017”. It is a result of the Wind Energy Science Conference 2017, Lyngby, Copenhagen, Denmark, 26–29 June 2017.
Acknowledgements
The authors would like to thank the many individuals who have contributed to the project including Brian Helenbrook, Nojan Bagheri-Sadeghi, Paul Pavone, Devon Jedamski, Steve McCauliff, Hebron Yam, Stuart Wilson, Cameron Gibb, Alison Davis, Mike Valleau and of course Jacob Weller from the Clarkson University machine shop facilities. We would also like to thank David Johnson from the University of Waterloo for the use of his facility and the help of his students: Leif Falk, Michael McKinnon and Farid Samara. Finally, we are very grateful for the funding support of this project from the New York State Energy and Research Development Authority (NYSERA) through NEXUS-NY.
Edited by: Jens Nørkær Sørensen
Reviewed by: two anonymous referees
https://math.stackexchange.com/questions/2242821/tuple-builder-notation | # Tuple-builder notation
For some function $f : \mathbb{N} \to S$, is there a concise (and preferably established) way to build tuples from sets $A\subset \mathbb{N}$ of the values they map to in $S$? The exact order of the tuple doesn't really matter, as long as it preserves some ordering.
An example is in place. Let $A = \{1, 5, 6\}$, I'm looking for a way to express the $|A|$-tuple $(f(1), f(5), f(6))$. The tuple $(f(5), f(6), f(1))$ is equally acceptable, but the (multi)set $\{f(1), f(5), f(6)\}$ would not do.
If we see tuples as nested ordered pairs $(a_1, a_2, \cdots, a_n) \equiv (a_1, (a_2, \cdots, a_n))$ with $(a_i)\equiv (a_i, \emptyset)$, I'm essentially looking for notation for the function:
$$g(f, A) = \left\{\begin{array}{ll} \emptyset & \text{if } A = \emptyset, \\ (f(\min A),\ g(f, A \setminus \{\min A\})) & \text{else}. \end{array}\right.$$
I'm tempted to use notation parallel to the set-builder, i.e., $(f(x) : x \in A)$ but that doesn't really convey that the ordering matter.
Edit to clarify the order requirement: The order of the tuple doesn't matter at all, but it matters that it is ordered. In other words, for $f : \mathbb{N} \to S$ and $h : \mathbb{N} \to S$, I want the following property of the notation:
$$(f(x) : x \in A) = (h(x) : x \in A) \iff \forall x \in A, f(x) = h(x).$$
Obviously, this property holds for any ordering as long as it only depends on $A$. But it doesn't really feel that $(f(x) : x \in A)$ conveys that.
• I'm confused by your question: you say $(f(5), f(6), f(1))$ is just as acceptable as $(f(1), f(5), f(6))$, but later you say that $(f(x) : x \in A)$ is not sufficient because it doesn't convey ordering. Can you clarify? – Clive Newstead Apr 20 '17 at 2:42
Given $A \subseteq \mathbb{N}$, define $\ell_n(A) \in \mathbb{N}$ for $n < |A|$ inductively by $$\ell_0(A) = \mathrm{min}(A) \quad \text{and} \quad \ell_n(A) = \mathrm{min} \Big(A \setminus \{ \ell_0(A), \dots, \ell_{n-1}(A) \}\Big) \text{ for all } 0 < n < |A|$$ Thus the sequence $(\ell_n(A) \mid n < |A|)$ lists the elements of $A$ in increasing order.
Given a function $f : \mathbb{N} \to S$ and a subset $A \subseteq \mathbb{N}$, the sequence you seek is therefore $$\big( f(\ell_n(A)) \mid n < |A| \big)$$
Note that this definition is valid even when $A$ is empty, in which case it produces the empty sequence, and when $A$ is infinite, in which case it produces an infinite sequence of elements of $S$.
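For finite $A$, this construction is exactly what a one-liner computes; a sketch in Python, where `sorted` plays the role of the $\ell_n$ enumeration:

```python
def ordered_tuple(f, A):
    """(f(l_0(A)), f(l_1(A)), ...): apply f along A listed in increasing order."""
    return tuple(f(x) for x in sorted(A))

print(ordered_tuple(lambda x: x * x, {1, 5, 6}))  # -> (1, 25, 36)
print(ordered_tuple(lambda x: x * x, set()))      # -> ()
```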
Normally, for a general index set $A$, we encode $A$-indexed tuples as functions with domain $A$.
For example, (the encoding of) an $A$-indexed tuple of elements of $S$ would then be the same thing as a function $A \to S$. In your example, the tuple you're looking for is encoded as the restriction $f|_A$.
It is not uncommon to use sequence notation for tuples; e.g. you might write the tuple as $\{ f(a) \}_{a \in A}$ to refer to the entire $A$-indexed sequence.
Note this is often even done with $n$-tuples, taking the index set to be $\{0, 1, \ldots, n-1\}$ (or $\{1, 2, \ldots, n \}$ if $1$-up indexing is preferred)
Also note that the index set isn't required to be a subset of integers, ordered, or even countable.
If $A$ is countable (but not finite), then we can order its elements and form a sequence (whose elements are not repeated), i.e., there exists a bijection $\varphi: \mathbb{N} \to A$.
An "infinite tuple" is essentially a same as a sequence (i.e. a function $T$ whose domain is $\mathbb{N}$) so we can define it by simply composing $\varphi$ and $f$:
$$T(n) = f(\varphi(n)).$$
https://physics.stackexchange.com/questions/287315/si-unit-for-the-speed-of-light-through-a-vacuum | # SI unit for the speed of light through a vacuum [closed]
What is the SI unit for the speed of light through a vacuum? For example for mass it is kilograms and for force it is $\rm kg\, m\, s^{-2}$.
https://mathhelpboards.com/threads/pharaohs-modified-euler-method-question-from-yahoo-answers.886/ | # [SOLVED]Pharaoh's modified Euler method question from Yahoo Answers
#### CaptainBlack
##### Well-known member
Part 2 of Pharaoh's Taylor series and modified Euler question from Yahoo Answers
consider van der pol's equation
y" - 0.2(1-y^2)y' + y = 0 y(0)=0.1 y'(0)=0.1
2)
Write down the above problem as a system of first order differential equations.
Calculate the numerical solution at x = 0.2 using the Modified Euler method. Take the
step-length h = 0.1 and work to 6 decimal places accuracy. Compare with your solution
There is a standard method of converting higher order ODEs into first order systems, in this case it is to introduce the state vector:
$$Y(t)=\left[ \begin{array}{c} y(t) \\ y'(t) \end{array} \right]$$
Then the ODE becomes:
$$Y'(t)= F(t,Y)=\left[ \begin{array}{c} y'(t) \\ y''(t) \end{array} \right]= \left[ \begin{array}{c} y'(t) \\ 0.2 \left(1-(y(t))^2 \right) y'(t)-y(t) \end{array} \right]$$
Now you can use the standard form of the modified Euler method on this vector first order ODE.
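For concreteness, here is a sketch of that computation in Python, taking "modified Euler" in the Heun predictor-corrector sense (average of the slopes at the two ends of the step); two steps of h = 0.1 reach x = 0.2:

```python
def F(t, Y):
    """State derivative for van der Pol with mu = 0.2: Y = [y, y']."""
    y, yp = Y
    return [yp, 0.2 * (1.0 - y * y) * yp - y]

def modified_euler_step(F, t, Y, h):
    """Heun step: predict with Euler, then average the end-point slopes."""
    k1 = F(t, Y)
    Y_pred = [Y[i] + h * k1[i] for i in range(2)]
    k2 = F(t + h, Y_pred)
    return [Y[i] + 0.5 * h * (k1[i] + k2[i]) for i in range(2)]

Y, t, h = [0.1, 0.1], 0.0, 0.1
for _ in range(2):              # two steps of h = 0.1 land at x = 0.2
    Y = modified_euler_step(F, t, Y, h)
    t += h
print([round(v, 6) for v in Y])
```

The printed pair approximates (y(0.2), y'(0.2)) to the requested six decimal places and can be set against the Taylor-series values from part 1 for the comparison the problem asks for.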
CB
http://gdaymath.com/lessons/explodingdots/2-3-explaining-machines/ | Exploding Dots
2.3 Explaining More Machines
Here is a video, in Spanish, from the team of Goldfish & Robin and Friends, “kids explain math to kids” going through the details of the 1<–3 machine. And here is a video in English on some more about the 1<–3 machine.
We’re now set to think our way through the meaning of all sorts of machines. (See the video in the previous lesson.)
As usual, my solutions to these questions appear in the final section of this chapter.
4. In a $$1 \leftarrow 3$$ machine, three dots in any one box are equivalent to one dot one place to the left. (And each dot in the rightmost box is again worth $$1$$.) We get the dot values in this machine by noting that three $$1$$s is $$3$$, and three $$3$$s is $$9$$, and three $$9$$s is $$27$$, and so on.
a) What is the value of a dot in the next box to the left after this?
At one point we said that the $$1 \leftarrow 3$$ code for fifteen is $$120$$. And we see that this is correct: one $$9$$ and two $$3$$s does indeed make fifteen.
b) Could we say that the $$1 \leftarrow 3$$ code for fifteen is $$0120$$? That is, is it okay to put zeros in the front of these codes? What about zeros at the ends of codes? Are they optional? Is it okay to leave off the last zero of the code $$120$$ for fifteen and just write instead $$12$$?
c) What number has $$1 \leftarrow 3$$ machine code $$21002$$?
d) What is the $$1 \leftarrow 3$$ machine code for two hundred?
The $$1 \leftarrow 3$$ machine codes for numbers are called ternary or base three representations of numbers. Only the three symbols $$0$$, $$1$$, and $$2$$ are ever needed to represent numbers in this system.
There is talk of building optic computers based on polarized light: either light travels in one plane, or in a perpendicular plane, or there is no light. For these computers, base-three arithmetic would be the appropriate notational system to use.
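The codes in any $$1 \leftarrow b$$ machine can be computed mechanically by repeated division with remainder; here is a sketch in Python, checked against the worked example above (fifteen has $$1 \leftarrow 3$$ code $$120$$):

```python
def machine_code(n, base):
    """Repeated division: the remainders give the code from right to left."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

def machine_value(code, base):
    """Each digit is worth 'base' times the digit one place to its right."""
    value = 0
    for digit in code:
        value = value * base + int(digit)
    return value

print(machine_code(15, 3))      # -> 120
print(machine_value("120", 3))  # -> 15
```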
5. In the $$1 \leftarrow 4$$ system four dots in any one box are equivalent to one dot one place to their left. a) What is the value of a dot in each box?
b) What is the $$1 \leftarrow 4$$ machine code for twenty nine?
c) What number has $$132$$ as its $$1 \leftarrow 4$$ machine code?
https://indico.cern.ch/event/749003/contributions/3344617/ | # XXVII International Workshop on Deep Inelastic Scattering and Related Subjects
Apr 8 – 12, 2019
Turin
## Constraining the Sea Quark Distributions Through W$^\pm$ Cross Section Ratio Measurements at STAR
Apr 10, 2019, 8:45 AM
20m
Rettorato - Aula Magna
Via Verdi, 8 Turin
Parallel Session Talk WG1: Structure Functions and Parton Densities
Matt Posik
### Description
Over the past several years, parton distribution functions (PDFs) have become more precise. However, there are still kinematic regions where more data are needed to help constrain global PDF extractions, such as the ratio of the sea quark distributions $\bar{d}$/$\bar{u}$ near the valence region. Furthermore, current measurements appear to suggest different high-$x$ behaviors of this ratio. The $W$ cross section ratio ($W^+$/$W^-$) is sensitive to the unpolarized quark distributions at large $Q^2$ set by the $W$ mass. Such a measurement can be used to help constrain the $\bar{d}$/$\bar{u}$ ratio. The STAR experiment at RHIC is well equipped to measure the leptonic decays of $W$ bosons, in the mid-pseudorapidity range $\left(|\eta| \leq 1 \right)$, produced in proton-proton collisions at $\sqrt{s}$ = 500/510 GeV. At these kinematics STAR is sensitive to quark distributions near $x$ of 0.16. STAR can also measure $W^+$/$W^-$ in a more forward region ranging from 1.0 $< \eta <$ 1.5, which extends the sea quark sensitivity to higher $x$. RHIC runs from 2011 through 2013 have collected about 350 pb$^{-1}$ of integrated luminosity, and an additional 350 pb$^{-1}$ from the 2017 run. This talk will present preliminary results of the 2011-2013 $W^+$/$W^-$ cross section ratio measurements.
https://scholarship.rice.edu/handle/1911/8299/browse?order=ASC&rpp=20&sort_by=1&etal=-1&offset=7516&type=title | Now showing items 7517-7536 of 10281
• #### Software design for simulation driven optimization
(2005)
This thesis describes a flexible framework for abstract numerical algorithms which treats algorithms as objects and makes them reusable, composable, and modifiable. These algorithm objects are implemented using the Rice ...
• #### Software distributed shared memory protocols that adapt between single writer and multiple writer
(1997)
We present two software distributed shared memory protocols that dynamically adapt between a single writer (SW) and a multiple writer (MW) protocol based on the application's sharing patterns. The first protocol adapts ...
• #### Software methods for improvement of cache performance on supercomputer applications
(1989)
Measurements of actual supercomputer cache performance have not been previously undertaken. PFC-Sim is a program-driven event tracing facility that can simulate data cache performance of very long programs. PFC-Sim simulates ...
• #### Software Support for Efficient Use of Modern Computer Architectures
(2015-08-14)
Parallelism is ubiquitous in modern computer architectures. Heterogeneity of CPU cores and deep memory hierarchies make modern architectures difficult to program efficiently. Achieving top performance on supercomputers is ...
• #### Software use in the workplace: A study of efficiency
(2005)
Although existing laboratory research shows that software is often used inefficiently, relatively little is known about (a) how efficiently software is used in a real work environment and (b) the factors that influence the ...
• #### Soil acetate and methane emissions from irrigated rice: The effects of field drainage and cultivar choice
(1996)
Methane emissions from flooded rice paddies are important contributors to the increasing atmospheric concentrations of methane, a potent greenhouse gas. With an increased need for rice agriculture to feed the growing global ...
• #### Soil moisture modelling, characteristics, and measurement in the Big Thicket (Texas)
(1996)
A three-fold investigation of soil-water relations in the Big Thicket of East Texas was conducted. This study included an examination of the accuracy and reliability of electronic resistance soil moisture sensors in ...
• #### Soil-structure interaction effects of simple structures supported on rectangular foundations
(1992)
This thesis deals with the effects of soil-structure interaction, both kinematic and inertial, on the responses of seismically excited rectangular foundations and simple structures supported on such foundations. The ground ...
• #### Solar plasma disturbance observed by a suprathermal ion detector on the moon
(1972)
An interplanetary plasma disturbance was observed from the Moon by the Rice University Suprathermal Ion Detector Experiment (SIDE) on November 19, 1970 starting at 00h 16min 37sec U.T. The time of observation after the ...
• #### Solar wind control of the open magnetosphere: Comparison of GGS/polar images and theory
(2001)
This investigation explores the connection between the open polar cap magnetic flux phiPCF and interplanetary conditions. phi PCF is determined from GGS/Polar VIS Earth Camera far ultraviolet observations of the aurora ...
(1976)
• #### Solid phase direct fluorination
(1991)
In 1970, solid phase direct fluorination, using modern flow control techniques, was pioneered in the laboratories of John Margrave. Modern direct fluorination is rooted in these developments. The direct fluorination of ...
• #### Solid solution formation kinetics----A preliminary study for CaCO3/BaCO3 and BaSO4/SrSO4 system
(2013-12-05)
The formation kinetics of CaCO3/BaCO3 and BaSO4/SrSO4 solid solution systems is studied in this research. A CaCO3-precoated stainless steel plug flow reactor is developed to study the CaCO3/BaCO3 system. Ba2+ is found to ...
• #### Solidarity with outsiders: The quest for common ground in theological ethics
(1992)
Definitions of solidarity multiply as religious communities respond to concerns of the marginalized and the oppressed. Roman Catholic social teaching and Sharon Welch's communicative ethics are compared with the communitarian ...
• #### Solidarity, responsibility, and freedom: Health care reform in the United States at the millennium
(2003)
The current crisis in the distribution of health care resources in the U.S. derives largely from insufficient access to health care, on the one hand, and inadequate control of rising costs, on the other hand. The response ...
• #### SOLIDIFICATION OF BINARY MIXTURE IN A FINITE PLANAR MEDIUM
(1985)
A model describing the one-dimensional heat transfer and solute redistribution during a planar solidification of a binary mixture in a finite medium, insulated and nonpermeable at one boundary, and subject to thermal ...
• #### Solubility and rheological examination of buckminsterfullerene in decalin and PSVS
(1996)
The structure-property relationships of C$_{60}$ dissolved in decahydronaphtalene (decalin) and a petroleum solvent viscous standard (PSVS) were studied. This work was motivated mainly by the interest to improve fullerene ...
• #### Solubility, length characterization, and cryo-TEM of pristine and functionalized single-walled carbon nanotubes in surfactant and superacid systems, with application to spinning SWNT fibers
(2010)
Single wall carbon nanotubes (SWNTs) have remarkable properties: the thermal conductivity of diamond, the electrical conductivity of copper, and mechanical properties greater than steel at a sixth of the weight. If these ...
• #### Solubilization, purification and characterization of ME31B, a putative RNA helicase
(1996)
ME31B, a putative RNA helicase belongs to the DEAD box family. A subclass of the DEAD box family is growing around ME31B. The members are DHH1 (S. cerevisiae), ste 13 (S. pombe), and rck/p54 (human and murine). In a conserved ...
• #### Solubilization-emulsification processes in nonionic surfactant-water-liquid triglyceride systems
(1994)
Liquid triglycerides, the major component of cooking oils, have been found difficult to remove from synthetic fabrics during washing. The relatively strong adhesion force between triglycerides and fabric and their high ... | 2017-06-25 15:51:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28001928329467773, "perplexity": 9870.49789502968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320539.37/warc/CC-MAIN-20170625152316-20170625172316-00702.warc.gz"} |
https://im.kendallhunt.com/MS/students/2/5/8/index.html | # Lesson 8
Position, Speed, and Direction
Let's use signed numbers to represent movement.
### 8.1: Distance, Rate, Time
1. An airplane moves at a constant speed of 120 miles per hour for 3 hours. How far does it go?
2. A train moves at constant speed and travels 6 miles in 4 minutes. What is its speed in miles per minute?
3. A car moves at a constant speed of 50 miles per hour. How long does it take the car to go 200 miles?
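All three questions above use the same relationship, distance = rate × time, solved for whichever quantity is unknown; a quick sketch (the function names are my own, not from the lesson):

```python
# distance = rate * time, rearranged for each unknown quantity

def distance(rate, time):
    return rate * time

def rate(dist, time):
    return dist / time

def duration(dist, rate):
    return dist / rate

print(distance(120, 3))   # airplane: 360 miles
print(rate(6, 4))         # train: 1.5 miles per minute
print(duration(200, 50))  # car: 4.0 hours
```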
### 8.2: Going Left, Going Right
1. After each move, record your location in the table. Then write an expression to represent the ending position that uses the starting position, the speed, and the time. The first row is done for you.
| starting position | direction | speed (units per second) | time (seconds) | ending position (units) | expression |
|---|---|---|---|---|---|
| 0 | right | 5 | 3 | +15 | $$0 + 5 \boldcdot 3$$ |
| 0 | left | 4 | 6 | | |
| 0 | right | 2 | 8 | | |
| 0 | right | 6 | 2 | | |
| 0 | left | 1.1 | 5 | | |
2. How can you see the direction of movement in the expression?
3. Using a starting position $$p$$, a speed $$s$$, and a time $$t$$, write two expressions for an ending position. One expression should show the result of moving right, and one expression should show the result of moving left.
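The two expressions asked for in question 3 can be sketched in code as well (a small illustration; the names `p`, `s`, `t` follow the activity, the function is an assumption of mine):

```python
def ending_position(p, s, t, direction):
    # moving right adds s*t to the start; moving left subtracts it
    return p + s * t if direction == "right" else p - s * t

print(ending_position(0, 5, 3, "right"))  # +15, the completed first row of the table
print(ending_position(0, 4, 6, "left"))   # -24
```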
### 8.3: Velocity
A traffic safety engineer was studying travel patterns along a highway. She set up a camera and recorded the speed and direction of cars and trucks that passed by the camera. Positions to the east of the camera are positive, and to the west are negative.
Vehicles that are traveling towards the east have a positive velocity, and vehicles that are traveling towards the west have a negative velocity.
1. Complete the table with the position of each vehicle if the vehicle is traveling at a constant speed for the indicated time period. Then write an equation.
| velocity (meters per second) | time after passing the camera (seconds) | ending position (meters) | equation describing the position |
|---|---|---|---|
| +25 | +10 | +250 | $$25 \boldcdot 10 = 250$$ |
| -20 | +30 | | |
| +32 | +40 | | |
| -35 | +20 | | |
| +28 | 0 | | |
2. If a car is traveling east when it passes the camera, will its position be positive or negative 60 seconds after it passes the camera? If we multiply two positive numbers, is the result positive or negative?
3. If a car is traveling west when it passes the camera, will its position be positive or negative 60 seconds after it passes the camera? If we multiply a negative and a positive number, is the result positive or negative?
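The missing ending positions in the table can be checked by multiplying each velocity by its time; a quick sketch (the pairs below are the table's velocity and time values):

```python
# (velocity in m/s, time in s) rows from the table above
readings = [(25, 10), (-20, 30), (32, 40), (-35, 20), (28, 0)]

for v, t in readings:
    # a negative product means the vehicle ends up west of the camera
    print(f"{v} * {t} = {v * t}")
```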
In many contexts we can interpret negative rates as "rates in the opposite direction." For example, a car that is traveling -35 miles per hour is traveling in the opposite direction of a car that is traveling 40 miles per hour.
1. What could it mean if we say that water is flowing at a rate of -5 gallons per minute?
2. Make up another situation with a negative rate, and explain what it could mean.
### Summary
We can use signed numbers to represent the position of an object along a line. We pick a point to be the reference point, and call it zero. Positions to the right of zero are positive. Positions to the left of zero are negative.
When we combine speed with direction indicated by the sign of the number, it is called velocity. For example, if you are moving 5 meters per second to the right, then your velocity is +5 meters per second. If you are moving 5 meters per second to the left, then your velocity is -5 meters per second.
If you start at zero and move 5 meters per second for 10 seconds, you will be $$5\boldcdot 10= 50$$ meters to the right of zero. In other words, $$5\boldcdot 10 = 50$$.
If you start at zero and move -5 meters per second for 10 seconds, you will be $$5\boldcdot 10= 50$$ meters to the left of zero. In other words,
$$\displaystyle \text-5\boldcdot 10 = \text-50$$
In general, a negative number times a positive number is a negative number. | 2022-09-29 15:23:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6484058499336243, "perplexity": 442.0943909571889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00789.warc.gz"} |
https://socratic.org/questions/what-is-the-distance-between-3-2-1-and-5-1-6 | # What is the distance between (3,2,-1) and (5,-1,6)?
May 1, 2016
$\sqrt{62} \approx 7.874$
#### Explanation:
The distance $d$ between $\left({x}_{1} , {y}_{1} , {z}_{1}\right)$ and $\left({x}_{2} , {y}_{2} , {z}_{2}\right)$ is given by the formula:
$d = \sqrt{{\left({x}_{2} - {x}_{1}\right)}^{2} + {\left({y}_{2} - {y}_{1}\right)}^{2} + {\left({z}_{2} - {z}_{1}\right)}^{2}}$
$= \sqrt{{\left(5 - 3\right)}^{2} + {\left(- 1 - 2\right)}^{2} + {\left(6 - \left(- 1\right)\right)}^{2}}$
$= \sqrt{{2}^{2} + {\left(- 3\right)}^{2} + {7}^{2}}$
$= \sqrt{4 + 9 + 49}$
$= \sqrt{62} \approx 7.874$ | 2019-10-13 20:44:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929720759391785, "perplexity": 810.307036892348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986647517.11/warc/CC-MAIN-20191013195541-20191013222541-00487.warc.gz"} |
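The same computation can be generalized to any pair of 3-D points; a minimal sketch:

```python
import math

def dist(p, q):
    # Euclidean distance: square root of the sum of squared coordinate differences
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(dist((3, 2, -1), (5, -1, 6)))  # sqrt(62), approximately 7.874
```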
https://codegolf.stackexchange.com/posts/55330/revisions | Post Made Community Wiki by Dennis
4 added 207 characters in body
# Stylus
Considering CSS is in here, I might as well add this in.
Stylus is a language which is compiled in to CSS3. It's easy to use and adds many features in to the language, like nesting, variables, functions, maths, and relaxed punctuation requirements.
## Length 1
=
Unlike other CSS pre-processor syntaxes, Stylus allows a variable to be named by any Unicode character that isn't a whitespace, number, or used symbol, without using the \$ symbol to denote variables.
2 deleted 30 characters in body
# Stylus
Considering CSS is in here, I might as well add this in.
Stylus is a language which is pre-processed to CSS which adds more functionality, like variables, functions, maths and a relaxed parser, which removes the need for the punctuation required in the language it processes to.
# Stylus
Considering CSS is in here, I might as well add this in.
Stylus is a language which is compiled in to CSS3. It's easy to use and adds many features in to the language, like nesting, variables, functions, maths, and relaxed punctuation requirements.
1 | 2019-07-19 13:14:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22980235517024994, "perplexity": 2714.5521638039468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526237.47/warc/CC-MAIN-20190719115720-20190719141720-00525.warc.gz"} |
http://comunidadwindows.org/standard-error/standard-error-for-parameter-estimates.php |
# Standard Error For Parameter Estimates
## Contents
Note that for a given sample, the 99% confidence interval would be wider than the 95% confidence interval, because it allows one to be more confident that the interval includes the unknown population parameter. The interval reflects the amount of random error in the sample and provides a range of values that are likely to include the unknown parameter.
## Standard Error Of Beta Coefficient
Variance inflation factors are diagonal elements of (X'X)-1 after X'X is scaled to correlation form.
t values are listed by degrees of freedom (df).
Consequently, the 95% CI is the likely range of the true, unknown parameter. Standard Error Of Coefficient In Linear Regression Your cache administrator is webmaster. Matt Kermode 260,095 views 6:14 Linear Regression - Least Squares Criterion Part 2 - Duration: 20:04. t values are listed by degrees of freedom (df).
Consequently, the 95% CI is the likely range of the true, unknown parameter. II. Standard Error of Sample Estimates: Sadly, the values of population parameters are often unknown, making it impossible to compute the standard deviation of a statistic.
## Standard Error Of Coefficient In Linear Regression
This lesson shows how to compute the standard error, based on sample data. http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Confidence_Intervals/BS704_Confidence_Intervals2.html
Moreover, when two groups are being compared, it is important to establish whether the groups are independent (e.g., men versus women) or dependent (i.e., matched or paired, such as a before-and-after comparison). A table of t values is shown in the frame below.
The name INTERCEPT represents the estimate of the intercept parameter.
Example with a simple linear regression in R: `#------generate one data set with epsilon ~ N(0, 0.25)------ seed <- 1152 #seed n <- 100 #nb of observations a <- 5 #intercept`. Under the hypothesis that the parameter is 0, the ratio t = bj / STDERR(bj) is distributed as Student's t with degrees of freedom equal to the degrees of freedom for the mean square error. The following R code computes the coefficient estimates and their standard errors manually: `dfData <- as.data.frame(read.csv("http://www.stat.tamu.edu/~sheather/book/docs/datasets/MichelinNY.csv", header=T)) # using direct calculations`, `vY <- as.matrix(dfData[, -2])[, 5] # dependent variable`, `mX`
The confidence interval does not reflect the variability in the unknown parameter. When an explanatory variable is nearly a linear combination of other explanatory variables in the model, the affected estimates are unstable and have high standard errors.
This means that there is a 95% probability that the confidence interval will contain the true population mean. For example, the standard error of the estimated slope is $$\sqrt{\widehat{\textrm{Var}}(\hat{b})} = \sqrt{[\hat{\sigma}^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}]_{22}} = \sqrt{\frac{n \hat{\sigma}^2}{n\sum x_i^2 - (\sum x_i)^2}}.$$ In R this can be computed step by step, e.g. `> num <- n * anova(mod)[[3]][2]` `> denom <-`. There is a version of the formula for the standard error in terms of Pearson's correlation, where ρ is the population value of the correlation. The standard deviation cannot be computed solely from sample attributes; it requires a knowledge of one or more population parameters.
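The rightmost form of that slope standard-error formula can be checked numerically against the usual sigma^2/Sxx form; below is a small sketch with a made-up data set (the `x`, `y` values are illustrative assumptions of mine, not taken from any source quoted here):

```python
import math

# made-up data purely to exercise the formula
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx  # slope estimate
a = ybar - b * xbar                                               # intercept estimate

# residual variance uses n - 2 because two parameters were estimated
sigma2 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)

# the two expressions for se(b) agree
se_direct = math.sqrt(sigma2 / sxx)
se_formula = math.sqrt(n * sigma2 / (n * sum(xi ** 2 for xi in x) - sum(x) ** 2))
assert abs(se_direct - se_formula) < 1e-12
print(round(b, 3), round(se_direct, 3))  # 1.96 0.055
```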
So, the general form of a confidence interval is: point estimate ± Z·SE(point estimate), where Z is the value from the standard normal distribution for the selected confidence level. The standard error is a measure of variability, not a measure of central tendency. Tolerance is 1 - R2 for the R2 that results from the regression of the explanatory variable on the other explanatory variables in the model.
The degrees of freedom for the zeroed estimates are reported as 0.
Assume the data in Table 1 are the data from a population of five X, Y pairs.
https://toph.co/p/sudoku-of-sheldon | # Practice on Toph
Participate in exhilarating programming contests, solve unique algorithm and data structure challenges and be a part of an awesome community.
# Sudoku of Sheldon
By MuntasirAK47 · Limits 1s, 512 MB
One day while playing the Sudoku game Sheldon got an emergency call from his mother, so now he has to leave. Sheldon almost solved his Sudoku, but just $4$ cells were left unsolved.
As you are the best friend of Sheldon that’s why he asked you to solve the rest part of the Sudoku game. You must have to solve the Sudoku in order to keep Sheldon happy otherwise he won’t help you next time. You just need to fill up $4$ cells so that each row and each column have distinct digits.
Remember that Sheldon has left only the cells of one column, one row, or one diagonal of the grid unsolved.
The grid consists of $4×4$ cells where it will have $4$ rows and $4$ columns.
## Input
There will be $4$ rows and $4$ columns where each row and each column will have distinct digits $d$. $(1 \leq d \leq 4)$.
The unsolved column or unsolved row or unsolved diagonal will be denoted by ‘$*$’.
## Output
Replace the ‘ $*$ ’ with any digit $d \hspace{.1cm} (1 \leq d \leq 4)$ and each row and each column must have distinct digits. You can print any answer as you want by maintaining the conditions properly.
## Samples
Input:
123*
341*
214*
432*

Output:
1234
3412
2143
4321
Input:
1234
3412
****
4321

Output:
1234
3412
2143
4321
Let me remind you again that the grid may remain diagonally unsolved.
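Since exactly one row, one column, or one diagonal is starred, every starred cell is forced by the digits already present in its row and column. A sketch of a solver built on that observation (the function name and structure are my own, not part of the problem):

```python
def solve(grid):
    # grid: list of 4 strings, with '*' marking the 4 unsolved cells
    g = [list(row) for row in grid]
    for r in range(4):
        for c in range(4):
            if g[r][c] == '*':
                # digits already used in this cell's row and column
                used = {g[r][k] for k in range(4)} | {g[k][c] for k in range(4)}
                # by the problem's guarantee, exactly one digit remains
                g[r][c] = ({'1', '2', '3', '4'} - used).pop()
    return [''.join(row) for row in g]

print(solve(['123*', '341*', '214*', '432*']))  # ['1234', '3412', '2143', '4321']
```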
### Statistics
94% Solution Ratio
mahdi.hasnat: Earliest, 2w ago
mahdi.hasnat: Fastest, 0.0s
mahdi.hasnat: Lightest, 131 kB
abid1729: Shortest, 444B
https://blender.stackexchange.com/questions/214536/any-way-to-get-a-multilayer-video | # Any way to get a multilayer video?
So I have an animation that I am trying to render in cycles. It takes about 20 mins per frame (my pc isn't amazing) and is 200 frames. I want to render the animation overnight, close the program and come back to it later.
Question: Is there any way to close a project after rendering, and be able to edit in the compositor after?
I know that you can do this with MultiLayer EXR files, but thats for pictures. How would I get a multilayer video?
Or is just any way to not have to composite the whole render in one go, and be able to close the program
• Well since a video can just be a series of images, I don't see why you can't. Render the video as a list of images in the multilayer EXR format, and when you are ready, open another blender project, and add an Input -> Image Sequence node into the compositor. I haven't actually tried this so I'm not sure if it works, so you might want to attempt it with just a couple frames first. – Prime_Aqasix Mar 10 at 6:41
• Short answer: No. ... and there is also no need for that. The highest bitdepth you'll get from a dnxhd/prores encoded clip is 10bit while you're getting 32 bit from an exr file. Also, to edit a movie clip, it needs to be decoded before anyway (splitting up the clip into editable frames requires caching, which slows down the process). Even if that would be technically possible, the file size for just a few seconds would be enormous for 10bit only. Anyway, using image sequences in comp is the way to go since decades for good reasons. – brockmann Mar 10 at 9:12
• Thank you, this helped a lot. – Relevred Mar 10 at 12:46
• One question: how would I export the composited video? it usually just edits the actual file, and I don't need to do anything but composite. However, Now that it is 200 exr files, how would I get a rendered video out of the compositor? – Relevred Mar 10 at 18:51
• Read about what encoding is: en.wikipedia.org/wiki/Video_codec You can use whatever you want: ffmpeg, Resolve or even the VSE to encode your sequence: How to convert image sequence to video using the VSE? – brockmann Mar 11 at 9:35
1. First render out your animation with the OpenExr Multilayer setting, into an image sequence.
2. Then load the image sequence into the compositor (can be on a separate blender instance), with the Input -> Image Sequence node
3. Do whatever composite you desire.
4. Then in the Output properties tab, set your frame start and end, then change the output format to ffmpeg video. Open up the encoding tab that appears, and change the container and encoding settings to what you want.
5. Render the animation (Ctrl + F12) (find the output video file in the folder specified under Output. It should be named something like 0001-0250.mkv or similar)
• Wait, would rendering it again just create another image sequence, or will it render the composite? – Relevred Mar 11 at 21:03
• If in the composite, you have the Use Nodes checkbox checked, it will render the composite. The best way to understand it is to just render a couple of frames and try it out. – Prime_Aqasix Mar 12 at 22:22 | 2021-06-13 21:06:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2880113124847412, "perplexity": 1603.9422379060088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610841.7/warc/CC-MAIN-20210613192529-20210613222529-00323.warc.gz"} |
https://www.physicsforums.com/threads/quantum-physics.71539/ | # Quantum Physics.
1. Apr 14, 2005
### The Bob
Quantum Physics.......
And you thought this thread was going to be serious.
Anyway ..... finally, today quantum physics was started in my college lectures. Most people in the room hated the sound of it but I was ready to go.
After the lesson my friend said 'I didn't realise quantum physics would be....' and then he stopped. So here is what the thread is about.
Finish the sentence above.
I don't know how long this thread will last but knowing my luck it will not be very long at all. Oh well.
Here is the start then:
I didn't realise quantum physics would be as fun as ripping my trousers off and dancing in the street.
2. Apr 14, 2005
### 2CentsWorth
I didn't realise quantum physics would be as fun as watching The Bob rip his trousers off and dance in the street.
3. Apr 14, 2005
### mapper
I didn't realise quantum physics would be this easy to understand.
4. Apr 14, 2005
### dextercioby
So what was the first course about? Wanna compare with mine.
Daniel.
5. Apr 14, 2005
### JasonRox
'I didn't realise quantum physics would be.... sooo easy like your girlfriend.
6. Apr 14, 2005
### The Bob
Well we have been doing Young's modulus at a basic level and Communications on a basic level also. Waves has been taught but nothing on quantum physics (or as they have written it, 'Quantum Behaviour'). Just been waiting to get down to the smaller things in life for a while.
7. Apr 14, 2005
### The Rev
I didn't realize quantum physics was so SASSY!
$$\phi$$
The Rev
8. Apr 14, 2005
### dextercioby
My first course on QM dealt with preHilbert spaces,metric,normed and complete spaces,all these only to introduce the concept of Hilbert space...
Daniel.
9. Apr 14, 2005
### Smurf
I didn't realise quantum physics would be-come such a well respected field that people would actually make threads about what they didn't realise it would be.
10. Apr 14, 2005
### Telos
I didn't realize quantum physics would be so much... minutiae.
11. Apr 14, 2005
### Gokul43201
Staff Emeritus
I didn't realize quantum physics would be so much...wait...if I have not yet realized, then my thought exists in a superposition of eigenthoughts...and by realizing, I have collapsed it into a definite thought...hence the act of realizing has actually affected the outcome of my thinking...darn, that's nothing new. :grumpy: <yawn>
12. Apr 14, 2005
### 1123581321
I didn't realise quantum physics would be as fun as listening to the song She Blinded Me with Science by Thomas Dolby. (That means I thought it was crazy)
Fibonacci
(in a good way)
13. Apr 14, 2005
### Staff: Mentor
Actually, I'm uncertain about what I didn't realize quantum physics would be.
14. Apr 15, 2005
### Edgardo
I didn't realize Quantum Physics is spelled Quantum $P\hbar ysics$
15. Apr 15, 2005
### ek
Laugh. Good one.
:rofl:
16. Apr 15, 2005
### The Bob
Strange twist on the idea, a funny variation on a theme. | 2018-08-21 06:32:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21507374942302704, "perplexity": 5702.83911835906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217970.87/warc/CC-MAIN-20180821053629-20180821073629-00455.warc.gz"} |
https://en.wikipedia.org/wiki/Plasmonic_nanoparticles | # Plasmonic nanoparticles
Plasmonic nanoparticles are particles whose electron density can couple with electromagnetic radiation of wavelengths far larger than the particle, due to the nature of the dielectric-metal interface between the medium and the particles. This is unlike a pure metal, where there is a maximum limit on the wavelength that can be effectively coupled, set by the material's size.[1]
What differentiates these particles from normal surface plasmons is that plasmonic nanoparticles also exhibit interesting scattering, absorbance, and coupling properties based on their geometries and relative positions.[2] These unique properties have made them a focus of research in many applications including solar cells, spectroscopy, signal enhancement for imaging, and cancer treatment.
Plasmons are the oscillations of free electrons that are the consequence of the formation of a dipole in the material due to electromagnetic waves. The electrons migrate in the material to restore its initial state; however, the light waves oscillate, leading to a constant shift in the dipole that forces the electrons to oscillate at the same frequency as the light. This coupling only occurs when the frequency of the light is equal to or less than the plasma frequency, and it is greatest at the plasma frequency, which is therefore called the resonant frequency. The scattering and absorbance cross-sections describe how strongly light of a given frequency is scattered or absorbed. Many processes exist for fabricating such nanoparticles, depending on the desired size and geometry.
The nanoparticles can form clusters to form plasmonic molecules and interact with each other to form cluster states. The symmetry of the nanoparticles and the distribution of the electrons within them can give a bonding or antibonding character to the interaction between the nanoparticles, similarly to molecular orbitals. Since light couples with the electrons, polarized light can be used to control the distribution of the electrons and alter the Mulliken term symbol for the irreducible representation. Changing the geometry of the nanoparticles can be used to manipulate the optical activity and properties of the system, but so can polarized light, by lowering the symmetry of the conductive electrons inside the particles and changing the dipole moment of the cluster. These clusters can be used to manipulate light on the nanoscale.[3]
## Theory
The equations that describe the scattering and absorbance cross-sections for spherical nanoparticles are:
${\displaystyle {{\sigma }_{scatt}}={\frac {8\pi }{3}}{{k}^{4}}{{R}^{6}}{{\left|{\frac {{{\varepsilon }_{particle}}-{{\varepsilon }_{medium}}}{{{\varepsilon }_{particle}}+2{{\varepsilon }_{medium}}}}\right|}^{2}}}$
${\displaystyle {{\sigma }_{abs}}=4\pi k{{R}^{3}}\,Im\left({\frac {{{\varepsilon }_{particle}}-{{\varepsilon }_{medium}}}{{{\varepsilon }_{particle}}+2{{\varepsilon }_{medium}}}}\right)}$
where k is the wavenumber of the electric field, R is the radius of the particle, ${\displaystyle {{\varepsilon }_{medium}}}$ is the relative permittivity of the dielectric medium and ${\displaystyle {{\varepsilon }_{particle}}}$ is the relative permittivity of the nanoparticle defined by
${\displaystyle {{\varepsilon }_{particle}}=1-{\frac {\omega _{p}^{2}}{{\omega }^{2}}}}$
also known as the Drude Model for free electrons where ${\displaystyle {{\omega }_{p}}}$ is the plasma frequency and ω is the frequency of the electromagnetic radiation. This equation is the result of solving the differential equation for a harmonic oscillator with a driving force proportional to the electric field that the particle is subjected to. For a more thorough derivation, see surface plasmon.
It logically follows that the resonance condition for these equations is reached when the denominator is close to zero, such that
${\displaystyle {{\varepsilon }_{particle}}+2{{\varepsilon }_{medium}}\approx 0}$
When this condition is fulfilled the cross-sections are at their maximum.
These cross-sections are for single, spherical particles. The equations change when particles are non-spherical, or are coupled to one or more other nanoparticles, since the effective geometry changes. This principle is important for several applications.
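As a rough numerical illustration (my own sketch, not from the article — the plasma frequency and the small damping term added so that the absorption cross-section is nonzero are assumed values), the cross-sections above can be evaluated directly:

```python
import numpy as np

def drude_eps(omega, omega_p, gamma):
    # Drude permittivity; the lossless model in the text is the limit gamma -> 0.
    return 1.0 - omega_p**2 / (omega**2 + 1j * gamma * omega)

def cross_sections(omega, R, omega_p, gamma, eps_m=1.0):
    c = 3.0e8                                # speed of light, m/s
    k = np.sqrt(eps_m) * omega / c           # wavenumber in the medium
    eps_p = drude_eps(omega, omega_p, gamma)
    ratio = (eps_p - eps_m) / (eps_p + 2 * eps_m)
    sigma_scatt = (8 * np.pi / 3) * k**4 * R**6 * np.abs(ratio) ** 2
    sigma_abs = 4 * np.pi * k * R**3 * np.imag(ratio)
    return sigma_scatt, sigma_abs

# The dipole resonance eps_particle + 2*eps_medium ~ 0 sits at
# omega = omega_p / sqrt(3) for a particle in vacuum (eps_m = 1).
omega_p = 1.37e16                            # assumed plasma frequency, rad/s
omega = np.linspace(0.2, 1.0, 2001) * omega_p
s_sc, s_ab = cross_sections(omega, R=20e-9, omega_p=omega_p, gamma=1e14)
peak = omega[np.argmax(s_ab)] / omega_p
print(round(peak, 3))                        # close to 1/sqrt(3) ~ 0.577
```

Increasing `eps_m` shifts the resonance to a lower frequency, consistent with the condition $\varepsilon_{particle}+2\varepsilon_{medium}\approx 0$.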
## Applications
### Plasmonic Solar Cells
Main article: Plasmonic solar cell
Due to their low absorption and their ability to scatter light back into the photovoltaic structure, plasmonic nanoparticles are under investigation as a method for increasing solar cell efficiency. Forcing more light to be absorbed by the dielectric increases efficiency.[4]
Plasmons can be excited by optical radiation and induce an electric current from hot electrons in materials fabricated from gold particles and light-sensitive molecules of porphin, of precise sizes and specific patterns. The wavelength to which the plasmon responds is a function of the size and spacing of the particles. The material is fabricated using ferroelectric nanolithography. Compared to conventional photoexcitation, the material produced three to 10 times the current.[5][6]
### Spectroscopy
In the past 5 years plasmonic nanoparticles have been explored as a method for high resolution spectroscopy. One group utilized 40 nm gold nanoparticles that had been functionalized such that they would bind specifically to epidermal growth factor receptors to determine the density of those receptors on a cell. This technique relies on the fact that the effective geometry of the particles change when they appear within one particle diameter (40 nm) of each other. Within that range, quantitative information on the EGFR density in the cell membrane can be retrieved based on the shift in resonant frequency of the plasmonic particles.[7]
### Cancer Treatment
Preliminary research indicates that the absorption of gold nanorods functionalized with epidermal growth factor is enough to amplify the effects of low power laser light such that it can be used for targeted radiation treatments.[8] | 2016-10-22 22:15:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7468163371086121, "perplexity": 529.5679668513429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719045.47/warc/CC-MAIN-20161020183839-00422-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://istopdeath.com/find-the-slope-and-y-intercept-07-07/ | # Find the Slope and Y-Intercept (0,7) , (0,7)
The two points provided are the same. There is an infinite number of lines that run through that point, so there is an infinite number of slopes and y-intercepts.
Infinite number of slopes and y-intercepts
Find the Slope and Y-Intercept (0,7) , (0,7) | 2022-09-24 16:14:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8227878212928772, "perplexity": 285.7637908202783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00166.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-6th-edition/chapter-1-equations-and-inequalities-exercise-set-1-1-page-105/87 | ## College Algebra (6th Edition)
If $6$ is substituted for $x$ in the equation, is the resulting statement true or false? Use the order of operations (PEMDAS) to solve this problem. $2(x-3)-17=13-3(x+2)$ $2(6-3)-17=13-3(6+2)$ $2(3)-17=13-3(8)$ $6-17=13-24$ $-11=-11$ true | 2019-11-17 00:32:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8931474685668945, "perplexity": 379.5457309955789}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668772.53/warc/CC-MAIN-20191116231644-20191117015644-00162.warc.gz"} |
http://doc.gnu-darwin.org/Catalogue/entries/circle.html | ### by Graham Williams
##### CTAN Edition
circle No caption. Provides circles in math mode that can be used for the nextstep operator of temporal logic, in conjunction with \Box and \Diamond (latexsym) or \square and \lozenge (amssymb). LaTeX circles \circ and \bigcirc are not of the right size. The circles are taken from the font lcircle10. The package contains some hacks to approximate the right size and this solution is definitely not sufficient to give a high quality output. License: unknown Catalogued: 1998/10/13 | 2022-12-01 21:17:47 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231452107429504, "perplexity": 4500.073091020404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710869.86/warc/CC-MAIN-20221201185801-20221201215801-00440.warc.gz"} |
https://www.cut-the-knot.org/m/Algebra/CyclicInequalityInThreeVariables5.shtml | # A Cyclic Inequality in Three Variables V
### Solution
We have $(c-a)(c-b)(b-a)\ge 0\,$ so that
$ab^2+bc^2+ca^2\ge a^2b+b^2c+c^2a.$
Hence, it suffices to prove that
$\displaystyle\frac{ab^2+bc^2+ca^2+a^2b+b^2c+c^2a}{abc}\ge\frac{2\sqrt{3(a^2+b^2+c^2)}(a+b+c)}{ab+bc+ca}.$
This is equivalent to
$\displaystyle (a+b+c)(ab+bc+ca)\ge 3abc+\frac{2abc\sqrt{3(a^2+b^2+c^2)}(a+b+c)}{ab+bc+ca}.$
Assume, WLOG, $a+b+c=3.\,$ Then $ab+bc+ca=3(1-t^2),\,$ $0\le t\lt 1,\,$ and $\max(abc)=(1-t)^2(1+2t).\,$ It suffices to show that
$\displaystyle t^3+2t^2+2t+1\ge (1+2t)\sqrt{1+2t^2}$
which is equivalent to
$\displaystyle \frac{t^4(t+2)^2}{(1+2t)^2}+\frac{2t^2(t+2)}{1+2t}\ge 2t^2,$
i.e.,
$\displaystyle \frac{t^2(t+2)^2}{(1+2t)^2}+\frac{2(t+2)}{1+2t}\ge 2.$
But, for $t\in [0,1),\,$ $\displaystyle\frac{2(t+2)}{1+2t}\ge 2\,$ and, clearly, $\displaystyle\frac{t^2(t+2)^2}{(1+2t)^2}\ge 0.$
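As a quick numerical sanity check of the sufficient inequality above (not part of the solution — just random sampling of positive triples):

```python
import math
import random

random.seed(1)
for _ in range(100_000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    # LHS and RHS of the displayed sufficient inequality
    lhs = (a*b**2 + b*c**2 + c*a**2 + a**2*b + b**2*c + c**2*a) / (a*b*c)
    rhs = 2.0 * math.sqrt(3.0 * (a**2 + b**2 + c**2)) * (a + b + c) / (a*b + b*c + c*a)
    assert lhs >= rhs - 1e-9, (a, b, c)
print("no counterexample found; equality holds at a = b = c")
```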
### Acknowledgment
Leo Giugiuc has kindly communicated to me the problem above, along with his solution. The problem had been previously posted by Cooper Carpenter at the mathematical inequalities Facebook group. The problem is credited to Le Viet Hung.
Copyright © 1996-2018 Alexander Bogomolny
65271187 | 2019-08-24 13:39:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20470212399959564, "perplexity": 1626.9585314951362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321140.82/warc/CC-MAIN-20190824130424-20190824152424-00259.warc.gz"} |
https://matholympiad.org.bd/forum/viewtopic.php?f=35&t=3965 | ## BDMO Forum Mafia #2
Let's have some fun! If you love to discuss random topics, this is your place!
Posts: 90
Joined: Fri Dec 28, 2012 8:35 pm
### BDMO Forum Mafia #2
BDMO Forum Mafia #2 - Revenge of the Bashers
Once upon a time, the innocent people of BDMO Forum did geometry using only synthetic techniques. Yes, sometimes they'd do some dushtu stuff and add in a page of trig or long-winded length chasing, but it was harmless fun. The great priests of the land, those blessed with the gift of projective, maintained law and order throughout the land. The children diligently carried out their angle chasing, hoping to one day match the power of one of the Legends of AoPS.
But within this prosperity the seed of something horrible was growing. Then, one day it all broke loose. Complex numbers and barycentric coordinates reigned in the once hallowed sanctuaries of Geometry, and rumors where heard of Cartesian monsters lurking in the villages. The people of the forum realized that this was the work of one of them, and thus began the most important mission in the lifetime of the forum: the hunt for the bashers.
Guide to mafia:
General Mafia Rules: (These are to prevent abuse on the player's part. Just try to play fair and you won't have to be distracted by these rules)
Once signups are over, I will send a PM to each player telling them their role in the game. This PM will have all you need to know, don't worry. And don't reveal the role information to anyone outside the game!
In this game, the Synthetic Solvers are the civilians and the Bashers are the mafia. Every day phase, the forum votes to decide which person is lynched - their copies are burned and their compasses are thrown into the sea. And every night, the bashers creep around. When they find a victim, they show him the horrendous calculations and overpower his mind - making it unfit for holding any geometric thought for years.
The Bashers want to overpower the synthetic solvers and establish a reign of bash on the forums. The synthetic solvers must lynch all the Bashers before it is too late.
Since the roles are changed a lot in this game, we won't be having the day lynch by a majority. Thus day phase will NOT end early. At the end of the day, the person with the most votes is lynched.
The possible roles you may receive (more will be added if we have more players):
$\text{Figure Scout}$ - Synthetic Solver
Oh look, that arbitrary cevian forms a cyclic quad with the mixtilinear and excircle touchpoints!
Every night, you may target a person. At the end of the night phase, learn if that person is a Basher or not.
$\text{Angle Chaser}$ - Synthetic Solver
So we have $\angle PDC = \angle ADC = \angle ACB$, and... uh.... uhhhh..... I'm stuck
When the voting totals are counted at the end of day phase, you have $-1$ vote total on you.
$\text{Projective Priest}$ - Synthetic Solver
Clearly $(A,G;D,H) = (A,C;X,N) = -1$. Then by Desargues and Brianchon we are done.
Every night, you may target a person. At the end of the night phase, you learn who targeted that person and whom they targeted.
$\text{Geo Hater}$ - Synthetic Solver
It takes me 4 hours to do a G2 $\dots$
You are immune to all targeted actions during the night phase, except for the basher's nightkill and BashBaba's role (which may negate this ability for others).
$\text{Gonju}$ - Synthetic Solver
Let $\mathcal{K}$ be the circum-conic of $\triangle ABC$ passing through its Brocard points $\Omega_1$ and $\Omega_2$ and let $X$ be the 4th intersection of $\mathcal{K}$ and the circumcircle $\odot(ABC).$
During the day phase, your vote counts twice/has double power. (No one understands your awesome solutions fully, but they respect your words nonetheless)
$\text{Telv Cohl}$ - Synthetic Solver
[Generalization of Mixtilinear Incircle]
Every night, you may target a person. That person may not die during that night phase (they are soothed by awesome geometry solutions).
$\text{darij grinberg}$ - Synthetic Solver
Lemma 1: In triangle $\triangle ABC$, if $AB=AC$ then $\angle ABC = \angle ACB$
In any phase, you may target a player and show them a synthetic solution which starts from Euclid's axioms. This kills BOTH your target and yourself at the end of the phase.
$\text{Determinant Detector}$ - Basher
I once proved the existence of the Nine point circle... using Cartesian Coordinates!
Every night, you may target a person. At the end of the night phase, learn their role.
$\text{BashBaba}$ - Basher
So what if it's a G6? Conway Notation and strong EFFT kills it.
Every night, you may target a person. Their role does not work for this night phase only.
The game will be played by discussing in this thread. During the day phase, you may post in this thread and talk about whatever you want, and try to hunt out who may be Mafia. Please don't discuss outside the thread, as then people might be left out of the conversation.
Once you're ready to vote on someone, you can do so in the following format:
Code: Select all
[b]VOTE: (username)[/b]
Please use the proper forum username so that people are not confused. I will try to keep an official vote list updated regularly.
If there is a tie at the end of the day phase, the votes are resolved in this order (the person who was voted on by the earliest role in this list is lynched):
Bashbaba > Determinant Detector > darij grinberg > Telv Cohl > Gonju > Geo Hater > Projective Priest > Angle Chaser > Figure Scout
48 hours after the beginning of the day phase, it will end and the person with the most votes will be lynched. That will begin the night phase. During the night phase, you may NOT post in this thread. Mafia, and people with activated roles, may send me their actions through PM.
Role order: BashBaba > Geo Hater > Telv Cohl > Darij Grinberg > Nightkill > Determinant Detector > Figure Scout > Projective Priest
Ask me if you're confused about anything!
Some final things to note:
Sign up by posting in this thread! The game will begin once we have 9 people signed up on a first come first served basis, so sign up soon!
*If you are still interested after 9 people have signed up, post here/PM me and I'll keep you as a substitute. If initial interest is very high, we may have a game with more than 9 players.
Also, PM me if you're interested in being a spectator for the game (then you'll get to know who has which role).
ahmedittihad
Posts: 181
Joined: Mon Mar 28, 2016 6:21 pm
### Re: BDMO Forum Mafia #2
Zawadx wrote: This time, EVERYONE will get a special power role (no vanillas). Hopefully this will result in some fun for everyone!
"Everyone's super is another way to say no one is."
BTW, I'm playing.
Last edited by ahmedittihad on Tue Apr 11, 2017 10:26 pm, edited 1 time in total.
Frankly, my dear, I don't give a damn.
Raiyan Jamil
Posts: 138
Joined: Fri Mar 29, 2013 3:49 pm
### Re: BDMO Forum Mafia #2
Count me in
A smile is the best way to get through a tough situation, even if it's a fake smile.
dshasan
Posts: 66
Joined: Fri Aug 14, 2015 6:32 pm
Location: Dhaka, Bangladesh
### Re: BDMO Forum Mafia #2
ok. I'm in
The study of mathematics, like the Nile, begins in minuteness but ends in magnificence. - Charles Caleb Colton
Epshita32
Posts: 37
Joined: Mon Aug 24, 2015 12:34 am
### Re: BDMO Forum Mafia #2
Oka, I am in too.
Ananya Promi
Posts: 36
Joined: Sun Jan 10, 2016 4:07 pm
Location: Naogaon, Bangladesh
### Re: BDMO Forum Mafia #2
I am also in
TashkiManda
Posts: 4
Joined: Sat Dec 31, 2016 7:26 pm
### Re: BDMO Forum Mafia #2
1. ahmedittihad
2. Raiyan Jamil
3. dshasan
4. Epshita32
5. Ananya Promi
6. TashkiManda
Next user copy paste this and add your name at the end.
nahin munkar
Posts: 81
Joined: Mon Aug 17, 2015 6:51 pm
Location: banasree, dhaka
### Re: BDMO Forum Mafia #2
1. ahmedittihad
2. Raiyan Jamil
3. dshasan
4. Epshita32
5. Ananya Promi
6. TashkiManda
7. nahinmunkar
# Mathematicians stand on each other's shoulders. ~ Carl Friedrich Gauss
Zawadx
Posts: 90
Joined: Fri Dec 28, 2012 8:35 pm
### Re: BDMO Forum Mafia #2
BDMO Forum Mafia #2 - Revenge of the Bashers
Once upon a time, the innocent people of BDMO Forum did geometry using only synthetic techniques. Yes, sometimes they'd do some dushtu stuff and add in a page of trig or long-winded length chasing, but it was harmless fun. The great priests of the land, those blessed with the gift of projective, maintained law and order throughout the land. The children diligently carried out their angle chasing, hoping to one day match the power of one of the Legends of AoPS.
But within this prosperity the seed of something horrible was growing. Then, one day it all broke loose.
Complex numbers and barycentric coordinates reigned in the once hallowed sanctuaries of Geometry, and rumors were heard of Cartesian monsters lurking in the villages. The people of the forum realized that this was the work of one of them, and thus began the most important mission in the lifetime of the forum: the hunt for the bashers.
In this game, the Synthetic Solvers are the civilians and the Bashers are the mafia. Every day phase, the forum votes to decide which person is lynched - their copies are burned and their compasses are thrown into the sea. And every night, the bashers creep around. When they find a victim, they show him the horrendous calculations and overpower his mind - making it unfit for holding any geometric thought for years.
The Bashers want to overpower the synthetic solvers and establish a reign of bash on the forums. The synthetic solvers must lynch all the Bashers before it is too late.
Since the roles are changed a lot in this game, we won't be having the day lynch by a majority. Thus day phase will NOT end early. At the end of the day, the person with the most votes is lynched.
The possible roles you may receive (more will be added if we have more players):
$\text{Figure Scout}$ - Synthetic Solver
Oh look, that arbitrary cevian forms a cyclic quad with the mixtilinear and excircle touchpoints!
Every night, you may target a person. At the end of the night phase, learn if that person is a Basher or not.
$\text{Angle Chaser}$ - Synthetic Solver
So we have $\angle PDC = \angle ADC = \angle ACB$, and... uh.... uhhhh..... I'm stuck
When the voting totals are counted at the end of day phase, you have $-1$ vote total on you.
$\text{Projective Priest}$ - Synthetic Solver
Clearly $(A,G;D,H) = (A,C;X,N) = -1$. Then by Desargues and Brianchon we are done
Every night, you may target a person. At the end of the night phase, you learn who targeted that person and whom they targeted.
$\text{Geo Hater}$ - Synthetic Solver
It takes me 4 hours to do a G2$\dots$
You are immune to all targeted actions during the night phase, except for the basher's nightkill.
$\text{Gonju}$ - Synthetic Solver
Let $\mathcal{K}$ be the circum-conic of $\triangle ABC$ passing through its Brocard points $\Omega_1$ and $\Omega_2$ and let $X$ be the 4th intersection of $\mathcal{K}$ and the circumcircle $\odot(ABC).$
During the day phase, your vote counts twice/has double power. (No one understands your awesome solutions fully, but they respect your words nonetheless)
$\text{Telv Cohl}$ - Synthetic Solver
[Generalization of Mixtilinear Incircle]
Every night, you may target a person. That person may not die during that night phase (they are soothed by awesome geometry solutions).
$\text{darij grinberg}$ - Synthetic Solver
Lemma 1: In triangle $\triangle ABC$, if $AB=AC$ then $\angle ABC = \angle ACB$
In any phase, you may target a player and show them a synthetic solution which starts from Euclid's axioms. Thus kills BOTH your target and yourself at the end of the phase.
$\text{Exercise Solver}$ - Basher
I once proved the existence of the Nine point circle... using Cartesian Coordinates!
Every night, you may target a person. At the end of the night phase, learn their role.
$\text{BashBaba}$ - Basher
So what if it's a G6? Conway Notation and strong EFFT kills it.
Every night, you may target a person. Their role does not work for this night phase only.
Feel free to ask questions! Hopefully this will get us more signups
Epshita32
Posts: 37
Joined: Mon Aug 24, 2015 12:34 am
### Re: BDMO Forum Mafia #2
Can you please elaborate on the role of the projective priest?
https://cs.stackexchange.com/questions/94031/how-to-get-the-upper-lower-and-average-bound-of-a-given-algorithm | # How to get the upper, lower and average bound of a given algorithm?
How do I get the upper, lower, and average bounds of a given algorithm? What should my first step be? I searched on the internet and only found the definitions of those three. For example, if we take an algorithm that runs in $n\log(n)+2$, how do we find its lower and upper bounds? What would be the first step?
• It's okay. Sorry, my English writing skill is not very good. I have made some changes to the question. Jul 8 '18 at 15:40
• Applied the changes. Okay. Jul 8 '18 at 15:53
When we talk of best case, worst case, and average case, it is in the context of algorithms whose running time depends on the input and not just on its length. If the algorithm always runs in time $n\log n + 2$, then the best case, worst case, and average case are all the same.
For a non-trivial example, consider quicksort. Quicksort always runs in time $O(n^2)$, and there is a sequence of arrays, one of each length, on which quicksort runs in time $\Theta(n^2)$. Therefore the worst case running time of quicksort is $\Theta(n^2)$. The best case running time varies according to the implementation – it can be $\Theta(n\log n)$ or $\Theta(n)$.
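To make that concrete, here is a small sketch (my illustration, not from the original answer) that counts pivot comparisons in a naive first-element-pivot quicksort; an already-sorted array is a worst-case input for this variant, while a random array stays near $n\log n$:

```python
import random
import sys

sys.setrecursionlimit(10_000)  # the worst case recurses n levels deep

def quicksort(a, count):
    # Naive quicksort with the first element as pivot.
    # count[0] accumulates one comparison per non-pivot element per level.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    count[0] += len(rest)
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, count) + [pivot] + quicksort(right, count)

n = 500
random.seed(0)
rand_input = random.sample(range(n), n)
sorted_input = list(range(n))

c_rand, c_sorted = [0], [0]
assert quicksort(rand_input, c_rand) == sorted_input
assert quicksort(sorted_input, c_sorted) == sorted_input

print(c_rand[0], c_sorted[0])  # roughly 2*n*ln(n) vs exactly n*(n-1)/2 = 124750
```

Randomizing the pivot is the standard fix that makes the expected running time $\Theta(n\log n)$ on every input.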
What about average case? Whenever we talk about average case, we need to introduce a distribution, according to which the average is taken. For sorting, the standard distribution is a random permutation of $1,\ldots,n$. The average case running time of quicksort, with respect to this distribution, is $\Theta(n\log n)$. | 2022-01-27 04:25:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8184559941291809, "perplexity": 311.59302849691517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305052.56/warc/CC-MAIN-20220127012750-20220127042750-00228.warc.gz"} |
http://bootmath.com/proving-that-a-hlder-space-is-a-banach-space.html | Proving that a Hölder space is a Banach space
I am trying to show that the Hölder space $C^{k,\gamma}(\bar{U})$ is a Banach space. To do this, I successfully proved that the mapping $\|\cdot\| : C^{k,\gamma}(\bar{U}) \to [0,\infty)$ is a norm, by proving its properties.
But how do I show that the sequence $\{u_k\}_{k=1}^\infty \subset C^{k,\gamma}(\bar{U})$ converges to $u \in C^{k,\gamma}(\bar{U})$, that is, how do I show that $$\lim_{k \to \infty}\|u_k-u\|=0,$$ which would mean the normed linear space is complete, and hence a Banach space?
Here are the following taken from PDE Evans, 2nd edition, page 255:
Definition. The Hölder space $$C^{k,\gamma}(\bar{U})$$ consists of all functions $u \in C^k(\bar{U})$ for which the norm $$\|u\|_{C^{k,\gamma}(\bar{U})}:= \sum_{|\alpha|\le k} \|D^\alpha u \|_{C(\bar{U})}+\sum_{|\alpha|=k} [D^\alpha u]_{C^{0,\gamma}(\bar{U})}$$ is finite.
Also from page 254,
Definitions. (i) If $u : U \to \mathbb{R}$ is bounded and continuous, we write $$\|u\|_{C(\bar{U})}:=\sup_{x\in U}|u(x)|.$$
(ii) The $\gamma$th-Hölder seminorm of $u : U \to \mathbb{R}$ is $$[u]_{C^{0,\gamma}(\bar{U})}:=\sup_{\substack{x,y\in U \\ x \neq y}} \left\{\frac{|u(x)-u(y)|}{|x-y|^\gamma} \right\},$$ and the $\gamma$th-Hölder norm is $$\|u\|_{C^{0,\gamma}(\bar{U})}:=\|u\|_{C(\bar{U})}+[u]_{C^{0,\gamma}(\bar{U})}.$$
This is all I got so far:
\begin{align}
\|u_k-u\|_{C^{k,\gamma}(\bar{U})}&=\sum_{|\alpha|\le k} \|D^\alpha (u_k-u) \|_{C(\bar{U})}+\sum_{|\alpha|=k} [D^\alpha (u_k-u)]_{C^{0,\gamma}(\bar{U})} \\
&= \sum_{|\alpha|\le k} \sup_{x\in U} |D^\alpha u_k(x)-D^\alpha u(x)|+ \sum_{|\alpha|=k} \sup_{\substack{x,y\in U \\ x \neq y}} \left\{\frac{|[D^\alpha u_k(x)-D^\alpha u(x)]-[D^\alpha u_k(y)-D^\alpha u(y)]|}{|x-y|^\gamma} \right\}.
\end{align}
Now, where can I go from here, to show that $\lim_{k \to \infty}\|u_k-u\|_{C^{k,\gamma}(\bar{U})}=0$? The sequence is Cauchy, and I have to use that fact somehow.
We have to show the following: given a Cauchy sequence $(u_n)_{n\in \mathbb N}$ in $C^{k,\gamma}(\overline U),$ there is a $u \in C^{k,\gamma}(\overline U)$ such that $\lim_{n\rightarrow\infty}u_n = u$ in $C^{k,\gamma}(\overline U).$
Note that the Hölder norm is the “sum” of the $C^k$ norm (i.e. sup-norm up to the $k$-th derivatives) and the Hölder condition with parameter $\gamma \in (0,1)$ for the $k$-th derivatives. From the $C^k$ part of the Hölder norm and the completeness of $C^k(\overline U),$ we get a $u \in C^k(\overline U)$ with $\lim_{n\rightarrow\infty}u_n = u$ in $C^k(\overline U).$ Now let’s look at the Hölder condition. Let’s fix a multi-index $\alpha$ with $|\alpha| = k$ and write
$$v := D^\alpha u \qquad \text{and} \qquad v_n := D^\alpha u_n$$
for short. Then we have for fixed $x\neq y$
\begin{align} \frac{|v(x) - v(y)|}{|x-y|^\gamma} & = \frac{|v(x) - v_m(x) + v_m(x) - v_m(y) + v_m(y) - v(y)|}{|x-y|^\gamma} \\ & \leq \frac{|v(x) - v_m(x)|}{|x-y|^\gamma} + \frac{|v_m(x) - v_m(y)|}{|x-y|^\gamma} + \frac{|v_m(y) - v(y)|}{|x-y|^\gamma} \end{align}
for arbitrary $m \in \mathbb N.$ Here, the first and third summands on the r.h.s. can be made arbitrarily small by choosing $m$ large enough, since $v_m \rightarrow v$ uniformly, and the second summand on the r.h.s. is bounded independent of $m$ and $x\neq y$ since the sequence $u_n$ is Cauchy, hence bounded, in the Hölder norm. From all this we get a constant $M_\alpha$ such that
$$\frac{|v(x) - v(y)|}{|x-y|^\gamma} \leq M_\alpha \qquad \text{independent of } x\neq y.$$
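To spell out where such a bound comes from (a sketch; write $C := \sup_n [v_n]_{C^{0,\gamma}(\overline U)}$, which is finite because a Cauchy sequence is bounded in the Hölder norm): for fixed $x \neq y$, letting $m \to \infty$ kills the first and third summands above, so

$$\frac{|v(x) - v(y)|}{|x-y|^\gamma} \;\leq\; 0 + C + 0 \;=\; C,$$

and one may take $M_\alpha = C$.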
This shows that we have
$$u\in C^{k,\gamma}(\overline U).$$
So far so good. Finally, we have to show that $u_n \rightarrow u$ in $C^{k,\gamma}(\overline U).$ From the construction of $u,$ we already have $u_n \rightarrow u$ in $C^k(\overline U),$ so we only have to show convergence in the Hölder seminorms of the $k$-th derivatives. We use the notation $\alpha,v,v_n$ from above. We have
\begin{align} \frac{|(v-v_m)(x) - (v-v_m)(y)|}{|x-y|^\gamma} & = \frac{|v(x) - v_m(x) - v(y) + v_m(y)|}{|x-y|^\gamma} \\ & = \frac{|(\lim_{k\rightarrow\infty}v_k(x)) - v_m(x) - (\lim_{k\rightarrow\infty}v_k(y)) + v_m(y)|}{|x-y|^\gamma} \\ & = \lim_{k\rightarrow\infty}\frac{|v_k(x) - v_m(x) - v_k(y) + v_m(y)|}{|x-y|^\gamma} \\ & = \lim_{k\rightarrow\infty}\frac{|(v_k - v_m)(x) - (v_k - v_m)(y)|}{|x-y|^\gamma}. \end{align}
Here, the r.h.s. can be made arbitrarily small independent of $x\neq y$ by choosing $m$ large enough, since the sequence $u_n$ is Cauchy in the Hölder norm. So we find $v_n \rightarrow v$ in the Hölder seminorm. Since there are only finitely many multi-indices $\alpha$ (or equivalently, $k$-th derivatives) to consider, we can conclude
$$u_n \rightarrow u \qquad \text{in } C^{k,\gamma}(\overline U),$$
as desired.
Since your comment to my first answer is tricky to read, let me restate it. The follow-up question is: how does
$$\frac{|v(x)-v(y)|}{|x-y|^\gamma} \le M_\alpha < \infty \qquad \text{independent of } x\neq y \qquad\qquad\qquad(*)$$
for every multi-index $\alpha$ of order $k$ imply
$$u \in C^{k,\gamma}(\bar{U})?$$
The answer is as follows: in order to show $u \in C^{k,\gamma}(\bar{U}),$ we have to show that all the ingredients of the Hölder norm of $u$ can be calculated, and the value of the Hölder norm thus obtained is finite.
By construction of $u,$ we have
$$\sum_{|\beta|\leq k} \|D^\beta u\|_{C(\overline U)} < \infty.$$
Moreover, (*) above implies that for a fixed multi-index $\alpha$ of order $k,$ we have
$$[D^\alpha u]_{C^{0,\gamma}(\overline U)} \leq M_\alpha < \infty.$$
From this we get
$$\sum_{|\alpha| = k}[D^\alpha u]_{C^{0,\gamma}(\overline U)} \leq \sum_{|\alpha| = k} M_\alpha < \infty.$$
So we have all in all
$$\|u\|_{C^{k,\gamma}(\bar{U})} = \sum_{|\beta|\leq k} \|D^\beta u\|_{C(\overline U)} + \sum_{|\alpha| = k}[D^\alpha u]_{C^{0,\gamma}(\overline U)} < \infty$$
and thus
$$u \in C^{k,\gamma}(\bar{U}),$$
as desired.
http://spamdestructor.com/error-propagation/propagate-error-division.php | Propagate Error Division
# Propagate Error Division
Every measurement has an air of uncertainty about it, and not all uncertainties are equal. When measured quantities are multiplied or divided, it is their relative (fractional) errors that combine. Writing the errors explicitly for a product:

R + ΔR = (A + ΔA)(B + ΔB) = AB + (ΔA)B + A(ΔB) + (ΔA)(ΔB)

The cross term (ΔA)(ΔB) is small compared to (ΔA)B and A(ΔB), so to first order the fractional error in R is built from the fractional errors in A and B; the same holds for a quotient R = A/B. For independent random errors, the fractional errors are combined by "summing in quadrature":

ΔR/R = sqrt((ΔA/A)^2 + (ΔB/B)^2)

Consequences of the product rule:

• Power rule: the relative error in Q^n is n times the relative error in Q. For example, the relative determinate error in the square root of Q is one half the relative determinate error in Q.
• Multiplying a measurement by an exact constant leaves its relative error unchanged.

When quantities are added or subtracted instead, it is the absolute errors that combine, not the relative ones. Summarizing: addition, subtraction, and logarithmic equations lead to an absolute standard deviation, while multiplication, division, exponential, and anti-logarithmic equations lead to relative standard deviations. When mathematical operations are combined, the rules may be successively applied to each operation.

Covariance terms should be included in the computation only if they have been estimated from sufficient data; if the measurements of a and b are independent, the associated covariance term is zero, and in effect the sum of the cross terms approaches zero as N increases.

Example: a problem might state that there is a 5% uncertainty when measuring the radius of a very small object, i.e. Δr/r = 0.05. By the power rule, a quantity that goes as r^2 then carries a relative uncertainty of about 10%. If measured and predicted results agree within the limits of error, the law being tested is said to have been verified by the experiment.
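The quadrature rule for a quotient can be sketched in a few lines of code (the numbers below are invented for illustration):

```python
import math

def propagate_division(a, da, b, db):
    """Error in R = A/B for independent errors: relative errors add in quadrature."""
    r = a / b
    rel = math.sqrt((da / a) ** 2 + (db / b) ** 2)
    return r, r * rel

# A = 10.0 +/- 0.3 (3% relative), B = 4.0 +/- 0.16 (4% relative)
r, dr = propagate_division(10.0, 0.3, 4.0, 0.16)
print(r, dr)  # relative error sqrt(0.03^2 + 0.04^2) = 0.05, so R = 2.5 +/- 0.125
```

Note that the same function works for a product if the relative errors are the inputs of interest, since multiplication and division propagate relative errors identically.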
https://www.mathwarehouse.com/algebra/planes/systems/three-variable-equations.php | # Solutions: Systems of 3 variable Equations
#### What is a linear equation with 3 variables?
Diagram 1 is the graph of the plane $$2x + 3y + z = 6$$ .
The red triangle is the portion of the plane when x, y, and z values are all positive. This plane actually continues off in the negative direction. All that is pictured is the part of the plane that is intersected by the positive axes (the negative axes have dashed lines).
Diagram 1
#### What is a system of 3 variables equations?
Just as a system of linear equations in 2 variables describes more than one line, a system of 3-variable equations describes more than one plane.
### No Solutions, 1 Solution or Infinite Solutions
As with systems of linear equations, a system of planes can have no solution, exactly one solution, or infinitely many solutions.
### No Solution
#### Case I
Below is a picture of three planes that have no solution. There is no single point at which all three planes intersect, therefore this system has no solution.
#### Case II
The other common example of systems of three variables equations that have no solution is pictured below. In the case below, each plane intersects the other two planes. However, there is no single point at which all three planes meet. Therefore, the system of 3 variable equations below has no solution.
### One Solution of three variable systems
If the three planes intersect as pictured below, then the three-variable system has exactly one point in common, and therefore a single solution, represented by the black point below.
### Infinite Solutions of three variable systems
If the three planes intersect as pictured below, then the three-variable system has a line of intersection and therefore an infinite number of solutions.
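The one-solution case can be verified numerically. The sketch below reuses the plane 2x + 3y + z = 6 from Diagram 1 and adds two invented planes so that the three meet at a single point:

```python
import numpy as np

A = np.array([[2.0, 3.0, 1.0],   # 2x + 3y + z = 6 (the plane from Diagram 1)
              [1.0, -1.0, 0.0],  # x - y = 0 (invented for illustration)
              [0.0, 0.0, 1.0]])  # z = 1     (invented for illustration)
b = np.array([6.0, 0.0, 1.0])

# A full-rank coefficient matrix means the three planes meet in exactly one point.
assert np.linalg.matrix_rank(A) == 3
x = np.linalg.solve(A, b)
print(x)  # the single common point: x = y = z = 1
```

If the rank of A were less than 3, the planes would instead intersect in a line (or not at all), matching the infinite-solution and no-solution cases above.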
https://www.gamedev.net/forums/topic/603553-substrings-in-java/ | # Substrings in Java
## Recommended Posts
Alright, so I have a bit of code that tries to autocomplete a field for the user as they type and the code looks a little something like this...
JTextField ingredientName;
String oldText;
ArrayList<String> autocompleteList;

...

int caretPosition = ingredientName.getCaretPosition();
String testText = ingredientName.getText().substring(0, caretPosition);
if (!oldText.equals(testText)) {
    Boolean done = false;
    for (int i = 0; i < autocompleteList.size() && !done; i++) {
        if (testText.equals(autocompleteList.get(i).substring(0, caretPosition))) { // <--- Problem
            ingredientName.setText(autocompleteList.get(i));
            ingredientName.setCaretPosition(caretPosition);
            done = true;
        }
    }
}
oldText = testText;
I realize the code's probably a mess as it is, but it's a rough draft. The issue is that whenever I get to the 5th letter of a compatible entry, I get an array index out of bounds on the problem line. So say I've typed "Tort" and there's an autocomplete entry for "Tortilla", as soon as I hit "i" I get the error. This code is run on caret update for ingredientName, and yes, it's inside a functioning invokeLater so the issue's not there. The error is also happening on more than one autocomplete entry, so that's not it. It's reading the string fine, the replacement in the JTextField works beautifully. It's just bugging when it hits the 5th letter, and I can't understand why.
Heading to bed since I've been staring at this for a few hours, will take a look at any answers in the morning. Thanks in advance!
What is in autoCompleteList? If there's an entry in there that appears before "Tortilla" and which has only four characters, I'd expect an exception.
If you can provide a minimal but complete example for others to try it will be easier to debug.
Here's the list:
Beef
Cheddar Cheese
Lettuce
Ranch Dressing
Salsa
Tomato
Tortillas
So each of these entries is loaded into autocompleteList from an external text file.
For example, as "Tortillas" is typed, here's the text inside the JTextField along with the caret position and the eventual error.
T|omato
To|mato
Tor|tillas
Tort|illas
Torti|illas
Exception in thread "AWT-EventQueue-0" java.lang.StringIndexOutOfBoundsException: String index out of range: 5
You are making the comparison with all strings in the list, Beef is 4 characters
change:
if (testText.equals(autocompleteList.get(i).substring(0, caretPosition)))
to
if (testText.equals(autocompleteList.get(i).substring(0, Math.min(caretPosition, autocompleteList.get(i).length()))))
and your problem should go away.
Thank you! I just figured it out on my way home and had to sigh at how simple a fix it was. Rep for the help and thanks for your time.
http://www.physicsforums.com/showthread.php?t=510246 | ## Symmetric traceless tensor
Hi everybody,
With the use of two vectors $u^i$ and $v^i$, and $\delta^{ij}$, how can I construct two linearly independent rank 2 basis tensors which are symmetric and traceless? (i,j running over 1-2)
Any symmetric traceless 2x2 matrix can be written as a linear combination of diag(1,-1) and offdiag(1,1), but how can I construct a basis with the given set of vectors and tensors above?
Thanks for taking the time to read and I hope I made myself clear enough.
Cheers
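For what it's worth, here is a quick numerical check of one such construction (a sketch: S(a,b) below, a symmetrized outer product with the δ-trace removed, is my guess at the intended recipe, and the two vectors are chosen for illustration):

```python
import numpy as np

def S(a, b):
    # Symmetrized outer product minus the trace part; tr(delta) = 2 in 2D,
    # so tr(S(a, b)) = 2*(a.b) - 2*(a.b) = 0 and S is symmetric by construction.
    return np.outer(a, b) + np.outer(b, a) - np.eye(2) * (a @ b)

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
T1, T2 = S(u, u), S(u, v)
print(T1)  # S(u,u) reproduces diag(1, -1)
print(T2)  # S(u,v) reproduces offdiag(1, 1)
```

For these particular vectors, T1 and T2 come out as exactly the diag(1,-1) and offdiag(1,1) matrices mentioned in the question, so they span the 2-dimensional space of symmetric traceless 2x2 matrices.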
https://datascience.stackexchange.com/questions/51183/why-dont-we-gradually-update-the-activation-parameters-in-rnn-from-one-activati | # Why don't we gradually update the activation parameters in RNN from one activation to the next as the network is learning more?
I'm very new to (unidirectional, vanilla) RNNs and sequence modeling in general, and all I understood about the motivation for having the connection between two successive hidden layers/activations is that this connection is needed to reuse the information learnt from the $t$-th part of the $i$-th sequence, $x_i^{(t)}$, to learn about $x_i^{(t+1)}$, i.e. the $(t+1)$-st part of the (same) $i$-th sequence.
Correct me if I'm wrong, but I fail to see the motivation behind using the same activation-to-activation parameter set $\theta_{aa}$ for every connection between two successive hidden states, except, of course, that we then have fewer parameters to estimate while minimizing the cost. I can't help thinking that $\theta_{aa}$ should be gradually updated with each hidden state, as more information (= more of the same sequence, i.e. words in the case of translation) comes in. See the example below.
Let's consider the example of machine translation from English to French: (EN) "I am a man" to (FR) "Je suis un homme". Here, intuitively, the RNN should try to learn that "am" occurs with a certain probability after "I" in English, and correspondingly in French, "suis" occurs with a certain probability after "Je"; but given that it's already learnt that, the (conditional?) probability of the occurrence of "un homme" after "je suis" can be more effectively estimated when we know the probability of "a man" occurring after "I am". So intuitively, the RNN should be "better informed" when it knows more of the given sequence rather than less of it, and hence the activation parameters should be gradually updated accordingly.
I must be missing something, but I'm not sure what it is. I've only motivated myself using the machine translation example, but examples from other areas would also be appreciated.
The motivation for using an RNN is that the sequence length and the positions of words vary across the data.
For example, we could use the same trained RNN model to translate the following sentences:
1. I am a man
2. I am a woman and you are a man
In an RNN, we do not consider the absolute position of the words, only the relations between them. Thus separate activation parameters trained for different positions would be useless.
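Concretely, the parameter sharing in question looks like this in a minimal unrolled vanilla RNN (a sketch; the sizes and random initialization are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
H, D, T = 4, 3, 5                          # hidden size, input size, sequence length
W_aa = rng.normal(scale=0.1, size=(H, H))  # activation-to-activation weights
W_ax = rng.normal(scale=0.1, size=(H, D))  # input-to-activation weights
b = np.zeros(H)

a = np.zeros(H)                            # initial hidden state
xs = rng.normal(size=(T, D))               # one input sequence
for x_t in xs:
    # The SAME (W_aa, W_ax, b) are applied at every timestep; what changes
    # from step to step is the hidden state a, which accumulates the sequence.
    a = np.tanh(W_aa @ a + W_ax @ x_t + b)
print(a.shape)  # final hidden state, shape (4,)
```

Position-dependent behavior, to the extent it is needed, is carried by the hidden state `a` rather than by per-step parameter sets, which is also why one trained model can handle input sequences of different lengths.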
Moreover, to make the RNN better informed (using previous/next words), we could use gated units (GRU or LSTM) and bidirectional RNNs.
An RNN is supposed to be able to accumulate information across the sequence, as you suggest. Each time it observes a new token, it combines that token with its previous hidden state. It then incorporates information about that token into the hidden state and produces a new hidden state. The idea is that the hidden state summarizes information about all tokens seen so far.
The reality, however, is a bias toward recent tokens. LSTMs ameliorate that problem by modeling how much information in the hidden state to keep at each step. Nonetheless they will tend to lose information over long stretches.
Note that many applications of RNNs don't just use RNNs, but use RNNs followed by an attention mechanism. The attention sees all hidden states that the RNN produces. It can then "look back" at any hidden state, allowing information about the sequence to be used even if it isn't retained in the RNN's final hidden state.
https://digitalcommons.dartmouth.edu/cs_tr/150/ | ## Computer Science Technical Reports
#### Title
On the Power of Multi-Objects
Technical Report
2-28-1997
PCS-TR97-311
#### Abstract
In the standard "single-object" model of shared-memory computing, it is assumed that a process accesses at most one shared object in each of its steps. In this paper, we consider a more powerful variant---the "multi-object" model---in which each process may access *any* finite number of shared objects atomically in each of its steps. We present results that relate the synchronization power of a type in the multi-object model to its synchronization power in the single-object model. Although the types fetch&add and swap have the same synchronization power in the single-object model, Afek, Merritt, and Taubenfeld showed that their synchronization powers differ in the multi-object model. We prove that this divergence phenomenon is exhibited *only* by types at levels 1 and 2; all higher level types have the same, unbounded, synchronization power in the multi-object model. This paper identifies all possible relationships between a type's synchronization power in the single-object model and its synchronization power in the multi-object model.
http://www.yeolar.com/note/2015/10/24/thrift-missing-guide/ | # Thrift: The Missing Guide
2013-07-09
Written against Thrift 0.6.0
From the Thrift website:
Thrift is a software framework for scalable cross-language services development. It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, and OCaml.
Thrift is clearly abundant in features. What is sorely lacking though is good documentation. This guide is an attempt to fill that hole. But note that this is a reference guide — for a step-by-step example on how to use Thrift, refer to the Thrift tutorial.
Many aspects of the structure and organization of this guide have been borrowed from the (excellent) Google Protocol Buffer Language Guide. I thank the authors of that document.
# Language Reference
## Types
The Thrift type system consists of pre-defined base types, user-defined structs, container types, exceptions and service definitions.
### Base Types
• bool: A boolean value (true or false), one byte
• byte: A signed byte
• i16: A 16-bit signed integer
• i32: A 32-bit signed integer
• i64: A 64-bit signed integer
• double: A 64-bit floating point number
• string: Encoding agnostic text or binary string
Note that Thrift does not support unsigned integers because they have no direct translation to native (primitive) types in many of Thrift’s target languages.
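All of these base types except string are fixed-width, so their wire footprint is easy to reason about. As a hedged illustration (pure Python; the big-endian format strings are an assumption of this sketch, not a statement about any particular Thrift protocol), the struct module shows the byte widths implied by the definitions above:

```python
import struct

# Big-endian pack formats with the same widths as the Thrift base types.
# The byte order here is an assumption of this sketch.
FORMATS = {"bool": ">?", "byte": ">b", "i16": ">h",
           "i32": ">i", "i64": ">q", "double": ">d"}

sizes = {name: struct.calcsize(fmt) for name, fmt in FORMATS.items()}
print(sizes)
```

Note there is no unsigned variant in the table, matching the restriction described above.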
### Containers
Thrift containers are strongly typed containers that map to the most commonly used containers in popular programming languages. They are annotated using the Java Generics style. There are three container types available:
• list<t1>: An ordered list of elements of type t1. May contain duplicates.
• set<t1>: An unordered set of unique elements of type t1.
• map<t1,t2>: A map of strictly unique keys of type t1 to values of type t2.
Types used in containers may be any valid Thrift type (including structs and exceptions) excluding services.
### Structs and Exceptions
A Thrift struct is conceptually similar to a C struct — a convenient way of grouping together (and encapsulating) related items. Structs translate to classes in object-oriented languages.
Exceptions are syntactically and functionally equivalent to structs except that they are declared using the exception keyword instead of the struct keyword. They differ from structs in semantics — when defining RPC services, developers may declare that a remote method throws an exception.
Details on defining structs and exceptions are the subject of a later section.
### Services
Service definitions are semantically equivalent to defining an interface (or a pure virtual abstract class) in object-oriented programming. The Thrift compiler generates fully functional client and server stubs that implement the interface.
Details on defining services are the subject of a later section.
## Typedefs
Thrift supports C/C++ style typedefs.
typedef i32 MyInteger // 1
typedef Tweet ReTweet // 2
1. Note there is no trailing semi-colon
2. Structs can also be used in typedefs
## Enums
When you’re defining a message type, you might want one of its fields to only have one of a pre-defined list of values. For example, let’s say you want to add a tweetType field for each Tweet, where the tweetType can be TWEET, RETWEET, DM, or REPLY. You can do this very simply by adding an enum to your message definition — a field with an enum type can only have one of a specified set of constants as its value (if you try to provide a different value, the parser will treat it like an unknown field). In the following example we’ve added an enum called TweetType with all the possible values, and a field of the same type:
enum TweetType {
TWEET, // 1
RETWEET = 2, // 2
DM = 0xa, // 3
} // 4
struct Tweet {
1: required i32 userId;
3: required string text;
4: optional Location loc;
5: optional TweetType tweetType = TweetType.TWEET // 5
16: optional string language = "english"
}
1. Enums are specified C-style. Compiler assigns default values starting at 0.
2. You can of course, supply specific integral values for constants.
3. Hex values are also acceptable.
4. Again notice no trailing semi-colon
5. Use the fully qualified name of the constant when assigning default values.
Note that unlike Protocol Buffers, Thrift does NOT yet support nested enums (or structs, for that matter).
Enumerator constants MUST be in the range of positive 32-bit integers.
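To make these semantics concrete, here is a minimal hand-written Python sketch (illustrative only, not Thrift-generated code) of how a target language can expose the TweetType constants, including a reverse lookup from a numeric wire value back to an enumerator:

```python
class TweetType:
    """Hand-written stand-in for a generated enum class."""
    TWEET = 0      # compiler-assigned default, starting at 0
    RETWEET = 2    # explicitly assigned
    DM = 0xa       # hex values are acceptable

    _VALUES_TO_NAMES = {TWEET: "TWEET", RETWEET: "RETWEET", DM: "DM"}

    @classmethod
    def find_by_value(cls, value):
        """Numeric wire value -> enumerator name, or None if unknown."""
        return cls._VALUES_TO_NAMES.get(value)

name = TweetType.find_by_value(0xa)
```

Here find_by_value(99) would return None, which is one reasonable way to surface a numeric value the reader does not know.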
## Comments

Thrift supports shell-style, C-style multi-line as well as single-line Java/C++ style comments.
# This is a valid comment.
/*
* This is a multi-line comment.
* Just like in C.
*/
// C++/Java style single-line comments work just as well.
## Namespaces
Namespaces in Thrift are akin to namespaces in C++ or packages in Java — they offer a convenient way of organizing (or isolating) your code. Namespaces may also be used to prevent name clashes between type definitions.
Because each language has its own package-like mechanisms (e.g. Python has modules), Thrift allows you to customize the namespace behavior on a per-language basis:
namespace cpp com.example.project // 1
namespace java com.example.project // 2
1. Translates to namespace com { namespace example { namespace project {
2. Translates to package com.example.project
## Includes
It is often useful to split up Thrift definitions in separate files to ease maintenance, enable reuse and improve modularity/organization. Thrift allows files to include other Thrift files. Included files are looked up in the current directory and by searching relative to any paths specified with the -I compiler flag.
Included objects are accessed using the name of the Thrift file as a prefix.
include "tweet.thrift" // 1
...
struct TweetSearchResult {
1: list<tweet.Tweet> tweets; // 2
}
1. File names must be quoted; again notice the absent semi-colon.
2. Note the tweet prefix.
## Constants
Thrift lets you define constants for use across languages. Complex types and structs are specified using JSON notation.
const i32 INT_CONST = 1234; // 1
const map<string,string> MAP_CONST = {"hello": "world", "goodnight": "moon"}
1. Semi-colon is (confusingly) optional; hex values are valid here.
## Defining Structs
Structs (also known as messages in some systems) are the basic building blocks in a Thrift IDL. A struct is composed of fields; each field has a unique integer identifier, a type, a name and an optional default value.
Consider a simple example. Suppose you want to build a Twitter-like service. Here is how you might define a Tweet:
struct Location { // 5
1: required double latitude;
2: required double longitude;
}
struct Tweet {
1: required i32 userId; // 1
2: required string userName; // 2
3: required string text;
4: optional Location loc; // 3
16: optional string language = "english" // 4
}
1. Every field must have a unique, positive integer identifier
2. Fields may be marked as required or optional
3. Structs may contain other structs
4. You may specify an optional "default" value for a field
5. Multiple structs can be defined and referred to within the same Thrift file
As you can see, each field in the message definition has a unique numbered tag. These tags are used to identify your fields in the wire format, and should not be changed once your message type is in use.
Fields may be marked required or optional with obvious meanings for well-formed structs. Thrift will complain if required fields have not been set in a struct, for instance. If an optional field has not been set in the struct, it will not be serialized over the wire. If a default value has been specified for an optional field, the field is assigned the default value when the struct is parsed and no value has been explicitly assigned for that field.
Unlike services, structs do not support inheritance, that is, a struct may not extend other structs.
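The rules above (required vs. optional, defaults, unset fields skipped on the wire) can be sketched in plain Python. Everything here is hypothetical hand-written code, not compiler output:

```python
class Tweet:
    """Hand-written sketch of the Tweet struct semantics described above."""

    def __init__(self, userId=None, userName=None, text=None,
                 loc=None, language="english"):   # default for an optional field
        self.userId = userId
        self.userName = userName
        self.text = text
        self.loc = loc
        self.language = language

    def validate(self):
        # Thrift complains if required fields have not been set.
        for field in ("userId", "userName", "text"):
            if getattr(self, field) is None:
                raise ValueError("required field %s is unset" % field)

    def fields_to_send(self):
        # An unset optional field is not serialized over the wire.
        wire = {"userId": self.userId, "userName": self.userName,
                "text": self.text, "language": self.language}
        if self.loc is not None:
            wire["loc"] = self.loc
        return wire

t = Tweet(userId=1, userName="alice", text="hello")
t.validate()                  # passes: all required fields are set
payload = t.fields_to_send()  # no 'loc' key; default language is kept
```

A real Thrift runtime tracks "set-ness" with dedicated flags rather than None checks, but the observable behavior has the same shape.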
Required Is Forever
You should be very careful about marking fields as required. If at some point you wish to stop writing or sending a required field, it will be problematic to change the field to an optional field — old readers will consider messages without this field to be incomplete and may reject or drop them unintentionally. You should consider writing application-specific custom validation routines for your buffers instead. Some have come to the conclusion that using required does more harm than good; they prefer to use only optional. However, this view is not universal.
## Defining Services
While there are several popular serialization/deserialization frameworks (like Protocol Buffers), there are few frameworks that provide out-of-the-box support for RPC-based services across multiple languages. This is one of the major attractions of Thrift.
Think of service definitions as Java interfaces — you need to supply a name and signatures for the methods. Optionally, a service may extend other services.
The Thrift compiler will generate service interface code (for the server) and stubs (for the client) in your chosen language. Thrift ships with RPC libraries for most languages that you can then use to run your client and server.
service Twitter {
// A method definition looks like C code. It has a return type, arguments,
// and optionally a list of exceptions that it may throw. Note that argument
// lists and exception list are specified using the exact same syntax as
// field lists in structs.
void ping(), // 1
bool postTweet(1:Tweet tweet) throws (1:TwitterUnavailable unavailable), // 2
TweetSearchResult searchTweets(1:string query); // 3
// The 'oneway' modifier indicates that the client only makes a request and
// does not wait for any response at all. Oneway methods MUST be void.
oneway void zip() // 4
}
1. Confusingly, method definitions can be terminated using comma or semi-colon
2. Arguments can be primitive types or structs
3. Likewise for return types
4. void is a valid return type for functions
Note that the argument lists (and exception lists) for functions are specified exactly like structs.
Services support inheritance: a service may optionally inherit from another service using the extends keyword.
Nested Types
As of this writing, Thrift does NOT support nested type definitions. That is, you may not define a struct (or an enum) within a struct; you may of course use structs/enums within other structs.
# Generated Code
This section contains documentation for working with Thrift generated code in various target languages. We begin by introducing the common concepts that are used across the board — these govern how the generated code is structured and will hopefully help you understand how to use it effectively.
## Concepts
Here is a pictorial view of the Thrift network stack:
Figure 1. The Thrift Network Stack
### Transport
The Transport layer provides a simple abstraction for reading/writing from/to the network. This enables Thrift to decouple the underlying transport from the rest of the system (serialization/deserialization, for instance).
Here are some of the methods exposed by the Transport interface:
• open
• close
• write
• flush
In addition to the Transport interface above, Thrift also uses a ServerTransport interface used to accept or create primitive transport objects. As the name suggests, ServerTransport is used mainly on the server side to create new Transport objects for incoming connections.
• open
• listen
• accept
• close
Here are some of the transports available for the majority of the Thrift-supported languages:
• file: read/write to/from a file on disk
• http: as the name suggests
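To see what the Transport contract amounts to, here is a toy in-memory transport (hypothetical code; real transports such as the file and http ones above add I/O semantics this sketch omits). It exposes the open/close/write/flush shape listed above, plus a read method:

```python
import io

class MemoryTransport:
    """Toy transport: reads/writes against an in-memory buffer instead of
    a socket or file, but exposes the same open/close/write/flush shape."""

    def __init__(self, initial=b""):
        self._buf = io.BytesIO(initial)
        self._open = False

    def open(self):
        self._open = True

    def close(self):
        self._open = False

    def write(self, data):
        assert self._open, "transport not open"
        self._buf.write(data)

    def flush(self):
        pass  # nothing buffered beyond the BytesIO itself

    def read(self, n):
        assert self._open, "transport not open"
        return self._buf.read(n)

t = MemoryTransport()
t.open()
t.write(b"hello")
t.flush()
t._buf.seek(0)      # rewind so the same toy buffer can be read back
data = t.read(5)
t.close()
```

This is exactly the decoupling the section describes: a protocol layered on top of this object would not care whether bytes go to memory, a file, or the network.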
### Protocol
The Protocol abstraction defines a mechanism to map in-memory data structures to a wire-format. In other words, a protocol specifies how datatypes use the underlying Transport to encode/decode themselves. Thus the protocol implementation governs the encoding scheme and is responsible for (de)serialization. Some examples of protocols in this sense include JSON, XML, plain text, compact binary etc.
Here is the Protocol interface:
writeMessageBegin(name, type, seq)
writeMessageEnd()
writeStructBegin(name)
writeStructEnd()
writeFieldBegin(name, type, id)
writeFieldEnd()
writeFieldStop()
writeMapBegin(ktype, vtype, size)
writeMapEnd()
writeListBegin(etype, size)
writeListEnd()
writeSetBegin(etype, size)
writeSetEnd()
writeBool(bool)
writeByte(byte)
writeI16(i16)
writeI32(i32)
writeI64(i64)
writeDouble(double)
writeString(string)
Thrift Protocols are stream oriented by design. There is no need for any explicit framing. For instance, it is not necessary to know the length of a string or the number of items in a list before we start serializing them.
Here are some of the protocols available for the majority of the Thrift-supported languages:
• binary: Fairly simple binary encoding — the length and type of a field are encoded as bytes followed by the actual value of the field.
• compact: Described in THRIFT-110
• json: uses JSON for encoding of data
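As an illustration of the binary bullet, the sketch below lays out one struct the way a simple type-and-length-prefixed encoding could: a 1-byte type code and a 2-byte field id per field, a length prefix for strings, and a stop byte instead of any up-front framing. The specific type-code values are an assumption of this sketch (modeled on Thrift's TType constants), not a wire-format specification:

```python
import struct

# Illustrative type codes (an assumption of this sketch, modeled on
# Thrift's TType constants: 8 = i32, 11 = string, 0 = field stop).
T_I32, T_STRING, T_STOP = 8, 11, 0

def write_field(out, ftype, fid, payload):
    # field header: 1-byte type + 2-byte (big-endian) field id, then the value
    out += struct.pack(">bh", ftype, fid)
    out += payload
    return out

buf = b""
buf = write_field(buf, T_I32, 1, struct.pack(">i", 42))           # field 1: an i32
buf = write_field(buf, T_STRING, 3,
                  struct.pack(">i", 5) + b"hello")                # field 3: a length-prefixed string
buf += struct.pack(">b", T_STOP)                                  # stop byte ends the struct
```

The stop byte is what lets a reader consume a struct without knowing its size in advance, matching the "no explicit framing" point above.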
### Processor
A Processor encapsulates the ability to read data from input streams and write to output streams. The input and output streams are represented by Protocol objects. The Processor interface is extremely simple:
interface TProcessor {
bool process(TProtocol in, TProtocol out) throws TException
}
Service-specific processor implementations are generated by the compiler. The Processor essentially reads data from the wire (using the input protocol), delegates processing to the handler (implemented by the user) and writes the response over the wire (using the output protocol).
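A minimal sketch of that flow, with stub protocols standing in for real ones (the single read/write methods below are a simplification of the full Protocol interface):

```python
class ListProtocol:
    """Stub protocol: 'reads' from / 'writes' to a Python list instead of a transport."""
    def __init__(self, items=None):
        self.items = list(items or [])
    def read(self):
        return self.items.pop(0)
    def write(self, value):
        self.items.append(value)

class Handler:
    """User-implemented service logic."""
    def ping(self):
        return "pong"

class Processor:
    """Reads a method name from the input protocol, delegates to the
    handler, writes the response to the output protocol."""
    def __init__(self, handler):
        self.handler = handler
    def process(self, iprot, oprot):
        method = iprot.read()
        result = getattr(self.handler, method)()
        oprot.write(result)
        return True

iprot, oprot = ListProtocol(["ping"]), ListProtocol()
ok = Processor(Handler()).process(iprot, oprot)
```

A generated processor additionally deserializes arguments and serializes exceptions, but the read-dispatch-write skeleton is the same.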
### Server
A Server pulls together all of the various features described above:
• Create a transport
• Create input/output protocols for the transport
• Create a processor based on the input/output protocols
• Wait for incoming connections and hand them off to the processor
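Those four responsibilities can be sketched end to end in a few lines. All of the classes below are illustrative stand-ins, not the Thrift library API:

```python
class Transport:           # step 1: a trivial in-memory "connection"
    def __init__(self, request):
        self.inbox, self.outbox = [request], []

class Protocol:            # step 2: reads/writes method names and results
    def __init__(self, transport):
        self.transport = transport
    def read_call(self):
        return self.transport.inbox.pop(0)
    def write_reply(self, value):
        self.transport.outbox.append(value)

class Processor:           # step 3: dispatches to user handler methods
    def __init__(self, handler):
        self.handler = handler
    def process(self, iprot, oprot):
        oprot.write_reply(getattr(self.handler, iprot.read_call())())

class Handler:
    def ping(self):
        return "pong"

def serve_one(connection, handler):   # step 4: handle one incoming connection
    prot = Protocol(connection)
    Processor(handler).process(prot, prot)
    return connection.outbox

replies = serve_one(Transport("ping"), Handler())
```

A real server would loop, accepting connections and handing each one to this logic; here a single canned "connection" is served.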
Next we discuss the generated code for specific languages. Unless mentioned otherwise, the sections below will assume the following Thrift specification:
Example IDL
namespace cpp thrift.example
namespace java thrift.example
enum TweetType {
TWEET,
RETWEET = 2,
DM = 0xa,
}
struct Location {
1: required double latitude;
2: required double longitude;
}
struct Tweet {
1: required i32 userId;
3: required string text;
4: optional Location loc;
5: optional TweetType tweetType = TweetType.TWEET;
16: optional string language = "english";
}
typedef list<Tweet> TweetList
struct TweetSearchResult {
1: TweetList tweets;
}
exception TwitterUnavailable {
1: string message;
}
const i32 MAX_RESULTS = 100;
service Twitter {
void ping(),
bool postTweet(1:Tweet tweet) throws (1:TwitterUnavailable unavailable),
TweetSearchResult searchTweets(1:string query);
oneway void zip()
}
How are nested structs initialized?
In an earlier section, we saw how Thrift allows structs to contain other structs (no nested definitions yet, though!). In most object-oriented and/or dynamic languages, structs map to objects, and so it is instructive to understand how Thrift initializes nested structs. One reasonable approach would be to treat the nested structs as pointers or references and initialize them with NULL, until explicitly set by the user.
Unfortunately, for many languages, Thrift uses a pass by value model. As a concrete example, consider the generated C++ code for the Tweet struct in our example above:
...
int32_t userId;
std::string text;
Location loc;
TweetType::type tweetType;
std::string language;
...
As you can see, the nested Location structure is fully allocated inline. Because Location is optional, the code uses the internal __isset flags to determine if the field has actually been "set" by the user.
This can lead to some surprising and unintuitive behavior:
• Since the full size of every sub-structure may be allocated at initialization in some languages, memory usage may be higher than you expect, especially for complicated structures with many unset fields.
• The parameters and return types for service methods may not be "optional" and you can’t assign or return null in any dynamic language. Thus to return a "no value" result from a method, you must declare an envelope structure with an optional field containing the value and then return the envelope with that field unset.
• The transport layer can, however, marshal method calls from older versions of a service definition with missing parameters. Thus, if the original service contained a method postTweet(1: Tweet tweet) and a later version changes it to postTweet(1: Tweet tweet, 2: string group), then an older client invoking the previous method will result in a newer server receiving the call with the new parameter unset. If the new server is in Java, for instance, you may in fact receive a null value for the new parameter. And yet you may not declare a parameter to be nullable within the IDL.
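The envelope workaround from the second point looks like this in a hand-written Python sketch (OptionalTweet and find_tweet are hypothetical names):

```python
class OptionalTweet:
    """Envelope struct: wraps an optional 'value' field so a service method
    can signal 'no result' without returning null."""
    def __init__(self, value=None):
        self.value = value
        self.value_is_set = value is not None   # stands in for an __isset flag

def find_tweet(tweet_id):
    # Hypothetical lookup: return an unset envelope when nothing matches.
    store = {1: "hello world"}
    if tweet_id in store:
        return OptionalTweet(store[tweet_id])
    return OptionalTweet()   # envelope returned with the field unset

hit, miss = find_tweet(1), find_tweet(99)
```

The caller always gets a well-formed envelope back and checks the set flag, instead of testing a return value against null.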
## Java
### Generated Files
• a single file (Constants.java) containing all constant definitions
• one file per struct, enum and service
$ tree gen-java
`-- thrift
    `-- example
        |-- Constants.java
        |-- Location.java
        |-- Tweet.java
        |-- TweetSearchResult.java
        |-- TweetType.java
        `-- Twitter.java

Tip: Naming Conventions

While the Thrift compiler does not enforce any naming conventions, it is advisable to stick to standard naming conventions, otherwise you may be in for some surprises. For instance, if you have a struct named tweetSearchResults (note the mixedCase), the Thrift compiler will generate a Java file named TweetSearchResults (note the CamelCase) containing a class named tweetSearchResults (like the original struct). This will obviously not compile under Java.

### Types

Thrift maps the various base and container types to Java types as follows:

• bool: boolean
• binary: byte[]
• byte: byte
• i16: short
• i32: int
• i64: long
• double: double
• string: String
• list<t1>: List<t1>
• set<t1>: Set<t1>
• map<t1,t2>: Map<t1, t2>

As you can see, the mapping is straightforward and one-to-one for the most part. This is not surprising given that Java was the primary target language when the Thrift project began.

### Typedefs

The Java language does not have any native support for typedefs. So when the Thrift Java code generator encounters a typedef declaration, it merely substitutes it with the original type. That is, even though you may have typedefd TypeA to TypeB, in the generated Java code all references to TypeB will be replaced by TypeA. Consider the example IDL above: the declaration for tweets in the generated code for TweetSearchResult is simply public List<Tweet> tweets.

### Enums

Thrift enums map to Java enum types. You can obtain the numeric value of an enum by using the getValue method (via the interface TEnum). In addition, the compiler generates a findByValue method to obtain the enum corresponding to a numeric value. This is more robust than using the ordinal feature of Java enums.

### Constants

Thrift puts all defined constants in a public class named Constants as public static final members.
Constants of any of the primitive types are supported.

Tip: Contain your Constants

If you have multiple Thrift files (in the same namespace) containing const definitions, the Thrift compiler will overwrite the Constants.java file with the definitions found in the file processed last. You must either define all your constants in a single file, or invoke the compiler on a single file that includes all the other files.

## C++

### Generated Files

• all constants go into a single .cpp/.h pair
• all type definitions (enums and structs) go into another .cpp/.h pair
• each service gets its own .cpp/.h pair

$ tree gen-cpp
|-- example_constants.cpp
|-- example_constants.h
|-- example_types.cpp
|-- example_types.h
|-- Twitter.cpp
|-- Twitter.h
`-- Twitter_server.skeleton.cpp
### Types
Thrift maps the various base and container types to C++ types as follows:
• bool: bool
• binary: std::string
• byte: int8_t
• i16: int16_t
• i32: int32_t
• i64: int64_t
• double: double
• string: std::string
• list<t1>: std::vector<t1>
• set<t1>: std::set<t1>
• map<t1,t2>: std::map<t1, t2>
## Other Languages
Python, Ruby, JavaScript, etc.
# Best Practices
## Versioning/Compatibility
Protocols evolve over time. If an existing message type no longer meets all your needs — for example, you’d like the message format to have an extra field — but you’d still like to use code created with the old format, don’t worry! It’s very simple to update message types without breaking any of your existing code. Just remember the following rules:
• Don’t change the numeric tags for any existing fields.
• Any new fields that you add should be optional. This means that any messages serialized by code using your "old" message format can be parsed by your new generated code, as they won’t be missing any required elements. You should set up sensible default values for these elements so that new code can properly interact with messages generated by old code. Similarly, messages created by your new code can be parsed by your old code: old binaries simply ignore the new field when parsing. However, the unknown fields are not discarded, and if the message is later serialized, the unknown fields are serialized along with it — so if the message is passed on to new code, the new fields are still available.
• Non-required fields can be removed, as long as the tag number is not used again in your updated message type (it may be better to rename the field instead, perhaps adding the prefix "OBSOLETE_", so that future users of your .thrift can’t accidentally reuse the number).
• Changing a default value is generally OK, as long as you remember that default values are never sent over the wire. Thus, if a program receives a message in which a particular field isn’t set, the program will see the default value as it was defined in that program’s version of the protocol. It will NOT see the default value that was defined in the sender’s code.
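A toy decoder makes these rules tangible. The (tag, value) pair list below is a stand-in for real wire bytes; the point is that an old reader decodes the tags it knows, applies its own defaults for absent optional fields, and retains unknown tags so they can be re-serialized:

```python
KNOWN_TAGS = {1: "userId", 3: "text"}        # tags this (old) reader knows
DEFAULTS = {"language": "english"}           # reader-side defaults, never sent on the wire

def decode(fields):
    """fields: list of (tag, value) pairs, possibly from a newer writer."""
    out = dict(DEFAULTS)
    unknown = []                             # retained so re-serialization can pass them on
    for tag, value in fields:
        if tag in KNOWN_TAGS:
            out[KNOWN_TAGS[tag]] = value
        else:
            unknown.append((tag, value))
    return out, unknown

# A newer writer added tag 7; tag 16 (language) was simply not sent.
msg, passed_through = decode([(1, 42), (3, "hi"), (7, "new-field")])
```

Because tag 16 never arrived, the reader sees its own default for language, and the unfamiliar tag 7 survives to be forwarded rather than being dropped.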
# Resources
http://www.yeolar.com/note/2015/10/24/thrift-missing-guide/
## The Annals of Statistics
### Testing homogeneity of multivariate normal mean vectors under an order restriction when the covariance matrices are common but unknown
#### Abstract
Suppose that an order restriction is imposed among several p-variate normal mean vectors. We are interested in testing the homogeneity of these mean vectors under this restriction. This problem is a multivariate extension of Bartholomew's problem [Biometrika 46 (1959) 36-48]. When the covariance matrices are known, this problem has been studied by Sasabuchi, Inutsuka and Kulatunga [Hiroshima Math. J. 22 (1992) 551-560], Sasabuchi, Kulatunga and Saito [Amer. J. Math. Management Sci. 18 (1998) 131-158] and some others. In the present paper, we consider the case when the covariance matrices are common but unknown. We propose a test statistic, study its upper tail probability under the null hypothesis and estimate its critical points.
#### Article information
Source
Ann. Statist., Volume 31, Number 5 (2003), 1517-1536.
Dates
First available in Project Euclid: 9 October 2003
Permanent link to this document
https://projecteuclid.org/euclid.aos/1065705117
Digital Object Identifier
doi:10.1214/aos/1065705117
Mathematical Reviews number (MathSciNet)
MR2012824
Zentralblatt MATH identifier
1065.62111
Subjects
Primary: 62F30: Inference under constraints
Secondary: 62F03: Hypothesis testing 62H12: Estimation
#### Citation
Sasabuchi, Shoichi; Tanaka, Koji; Tsukamoto, Takeshi. Testing homogeneity of multivariate normal mean vectors under an order restriction when the covariance matrices are common but unknown. Ann. Statist. 31 (2003), no. 5, 1517--1536. doi:10.1214/aos/1065705117. https://projecteuclid.org/euclid.aos/1065705117
#### References
• Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis, 2nd ed. Wiley, New York.
• Barlow, R. E., Bartholomew, D. J., Bremner, J. M. and Brunk, H. D. (1972). Statistical Inference under Order Restrictions: The Theory and Application of Isotonic Regression. Wiley, New York.
• Bartholomew, D. J. (1959). A test of homogeneity for ordered alternatives. Biometrika 46 36--48.
• Bartholomew, D. J. (1961). Ordered tests in the analysis of variance. Biometrika 48 325--332.
• Kulatunga, D. D. S. (1984). Convolutions of the probabilities $P(l,k)$ used in order restricted inference. Mem. Fac. Sci. Kyushu Univ. Ser. A Math. 38 9--15.
• Kulatunga, D. D. S. and Sasabuchi, S. (1984). A test of homogeneity of mean vectors against multivariate isotonic alternatives. Mem. Fac. Sci. Kyushu Univ. Ser. A Math. 38 151--161.
• Nomakuchi, K. and Shi, N.-Z. (1988). A test for a multiple isotonic regression problem. Biometrika 75 181--184.
• Perlman, M. D. (1969). One-sided testing problems in multivariate analysis. Ann. Math. Statist. 40 549--567.
• Robertson, T., Wright, F. T. and Dykstra, R. L. (1988). Order Restricted Statistical Inference. Wiley, New York.
• Sasabuchi, S., Inutsuka, M. and Kulatunga, D. D. S. (1983). A multivariate version of isotonic regression. Biometrika 70 465--472.
• Sasabuchi, S., Inutsuka, M. and Kulatunga, D. D. S. (1992). An algorithm for computing multivariate isotonic regression. Hiroshima Math. J. 22 551--560.
• Sasabuchi, S., Kulatunga, D. D. S. and Saito, H. (1998). Comparison of powers of some tests for testing homogeneity under order restrictions in multivariate normal means. Amer. J. Math. Management Sci. 18 131--158.
## Back from Paris
13/06/2011
It has been several days since I last posted on the blog, but for a very good reason: I was in Paris for the Eleventh Workshop on Non-Perturbative Quantum Chromodynamics (see here). It has been a beautiful chance to see Paris with the eyes of a tourist while being immersed in a lot of physics in the area I am currently contributing to. The conference was held at the Institut d'Astrophysique de Paris. This week was indeed plenty of information for people in high-energy physics due to the release by D0 of their measurements on the Wjj data, showing that the almost 5 sigma bump seen by CDF was not there (see here, here and here). In the conference there has been room for talks by experimentalists too, and that was the most shocking part, as I will explain below.
The talks were somehow interesting, with a couple of days mostly dedicated to the AdS/CFT approach to QCD. So, string theory got a lot of space, even if I should say that more promising approaches seem to exist. On the first day there were a couple of talks very near my interests, by Dario Zappalà and Marco Ruggieri, reporting on their very recent papers (here and here). I spent the whole week together with Marco, while with Dario I had a nice dinner near the Latin Quarter. The question Dario presented was about the existence (let me say "persistence") of massive excitations also beyond the critical temperature for Yang-Mills theory. We discussed this result together with Marco: Marco claimed that massive excitations should have melted beyond the critical temperature, while my view is that the residual mass should be due to temperature corrections to the mass spectrum of the theory. In his talk, Marco presented the idea of measuring the chiral chemical potential on the lattice, as this could give plain evidence for the existence of the critical endpoint without the annoying sign problem. A proof of existence of the critical endpoint is somehow the Holy Grail of finite-temperature QCD and something under intense study both theoretically and on the lattice. So, Marco's proposal could turn out to be a significant shortcut toward reaching this goal.
The second day Carl Bender gave a very beautiful talk telling us about PT-invariant quantum mechanics. PT stands for Parity and Time reversal. The point to start from is the Dirac postulate that the Hamiltonian must be Hermitian (self-adjoint). Differently from the other postulates of quantum mechanics, this one is too much of a mathematical requirement, and one could ask if it can be made somewhat looser. The paradigm Hamiltonian has the form $H=p^2+ix^3$. The answer is yes, of course, and we were left with the doubt that maybe this is the proper formulation of quantum mechanics rather than the standard one. I suspect that this could represent a possible technique useful in quantum gravity studies.
I have already spoken of the two days devoted to string theory. I have just noticed the talk by Luca Mazzucato showing how, with his approach, my scaling with $\lambda^\frac{1}{4}$ for the energy spectrum could be recovered in a strong coupling expansion, $\lambda$ being the 't Hooft coupling. Unfortunately, Gabriele Veneziano could not participate.
On Wednesday there came the most shocking declaration from an experimentalist: "We do not understand the proton". The reason for this arises from the results presented by people from CERN working at the LHC. They showed a systematic deviation of their Monte Carlo simulations from experimental data. This means, for us working in this area, that their modeling of low-energy QCD is bad and their estimation of the background is unsure. There is currently no way to get an exact evaluation of the proton scattering cross section. I am somewhat surprised by this as, as I have always pointed out in this blog, at least the structure of the gluon propagator at low energies should be known exactly from the lattice. So, modeling the proton in such Monte Carlo models should be a mitigated issue. This does not seem to be so, and these different communities do not seem to talk to each other at all. After this shocking news, in the evening we had an excellent social dinner and I had some fine discussions with foreign colleagues who were well aware of the books by Umberto Eco. One of them, Karl Landsteiner, suggested that we visit the Pantheon to look at the Foucault pendulum. Marco Ruggieri, Orlando Oliveira and I took this initiative the next day, and it was a very nice place to visit. If you are a physicist you can understand the emotion of being there, seeing that sphere moving as Newton's equations demand and inexorably proving the rotation of the Earth. Karl gave an interesting talk that day in which AdS/CFT is used to obtain transport coefficients in heavy-ion collisions.
On the same day, Orlando Oliveira gave his talk. Orlando is a friend of mine and has given relevant contributions to our understanding of the behavior of the low-energy gluon propagator. He has been the author of one of the papers that, at Regensburg in 2007, started the end of the so-called "scaling solution" for the gluon propagator (see here). Orlando is going ahead, starting from the acquired form of the gluon propagator, to understand the low-energy phenomenology of nuclear forces. In this work, he and his colleagues introduce an octet of scalar fields with the aim of producing the gluon mass through a non-zero vacuum expectation value (see here), producing chiral symmetry breaking. My work and that of Orlando somewhat overlap in the initial part, where we have an identical understanding of the low-energy behavior of Yang-Mills theory.
On Friday, there were a couple of significant events. The first one was my talk. This is a report on my recent paper. I will not discuss this point further, leaving this material to your judgement. The second relevant event was given in the talks by Thierry Grandou and our Chairman and Organizer Herbert Fried. The relevant paper is here. While Grandou gave a more mathematical introduction with a truly important result, the resummation of all gluon exchange diagrams, realizing the dream of having completely solved QCD, Fried provided a more concrete result, giving the binding potential between quarks analytically obtained from the preceding theorem. We were somewhat astonished by this, as it seems just a small step away from the Millennium prize. Berndt Mueller, one of the Organizers, suggested to Fried to determine the mass gap and wait a couple of years to get the prize. Indeed, this appears to be a truly striking exact result in the realm of QCD.
All in all, an interesting conference in a special place: Paris. For me, it has been a very nice period of full immersion in physics in the company of very nice friends.
Update: Mary Ann Rotondo put online the slides of the talks (see here).
P. Castorina, V. Greco, D. Jaccarino, & D. Zappalà (2011). A reanalysis of Finite Temperature SU(N) Gauge Theory. arXiv: 1105.5902v1
Marco Ruggieri (2011). The Critical End Point of Quantum Chromodynamics Detected by Chirally Imbalanced Quark Matter. arXiv: 1103.6186v1
Irene Amado, Karl Landsteiner, & Francisco Pena-Benitez (2011). Anomalous transport coefficients from Kubo formulas in Holography. JHEP 05 (2011) 081. arXiv: 1102.4577v3
O. Oliveira, W. de Paula, & T. Frederico (2011). Linking Dynamical Gluon Mass to Chiral Symmetry Breaking via a QCD Low Energy Effective Field Theory. arXiv: 1105.4899v1
Marco Frasca (2011). Chiral symmetry in the low-energy limit of QCD at finite temperature. arXiv: 1105.5274v2
H. M. Fried, Y. Gabellini, T. Grandou, & Y.-M. Sheu (2009). Gauge Invariant Summation of All QCD Virtual Gluon Exchanges. Eur. Phys. J. C 65: 395-411, 2010. arXiv: 0903.2644v2
## And you are calling it a gluon yet…
02/06/2010
One of the more questionable points I have discussed so far is: what are QCD asymptotic states at very low momenta? This question is not trivial at all. If you speak with experts in this matter, a common point they will share is that gluons carry color charge and so must form bound states. A claim like this has a strong implication indeed. The implication is that the Yang-Mills Hamiltonian must display the same asymptotic states at both ends of the energy range. But the problem is exactly in the self-interaction of the theory that, at very low momenta, becomes increasingly large, and gluons, the asymptotic states of Yang-Mills theory in the asymptotic freedom regime, are no longer good for describing physics. So, what are good states at low energies? I have already answered this question a lot of times (recently here) and more and more confirmations are around. I would like just to cite a very nice paper I have seen recently on arxiv (see here) by Stanley Brodsky, Guy de Teramond and Alexandre Deur. These authors have nicely exploited AdS/CFT symmetry, obtaining striking results in the understanding of low-energy QCD. I would like to cite again the work of these authors as their soft-wall model is indeed a strong support to my view. It would be really interesting to get them working out a pure Yang-Mills model, obtaining the beta function and all that.
What one has at the low end of momenta is a new set of states, glue states or glueballs if you prefer, that permit strong interactions. These states have already been seen in most laboratories around the world and belong to the open question of the understanding of the lower part of the hadronic spectrum.
07/10/2009
Stan Brodsky is a renowned physicist who has produced a lot of very good work. As I work on QCD, I try to be as up-to-date as possible and I spend some time reading the most recent literature. AdS/CFT applied to QCD is a very hot topic these days and I ran into a beautiful paper by Stan and Guy de Téramond that was recently published in Physical Review Letters (a preprint is here). Their work is inspired by AdS/CFT in that they are able to map a light-front Hamiltonian for QCD onto a five-dimensional Anti-de Sitter space, producing a Schrödinger-like equation with a proper potential to get the spectrum of the theory. This equation depends on a single proper variable and is exactly solvable. Two classes of models can be identified in this way, which are those well known in the literature:
• Hard-wall model with a potential described by an infinite potential wall till a given cut-off that fixes the mass scale.
• Soft-wall model with a harmonic potential producing Regge trajectories.
So, these authors are able to give a clever formulation of two known models of QCD obtained from the AdS/CFT conjecture and they work them out, obtaining the corresponding spectra of mesons and baryons. I would like to emphasize that the hard-wall model was formulated by Joseph Polchinski and Matthew Strassler and was instrumental in showing how successful AdS/CFT could be in describing the QCD spectrum. This paper appeared in Physical Review Letters and can be found here. Now, leaving aside Regge trajectories, what Stan and Guy show is that the mass spectrum for glueballs in the hard-wall model goes like
$m_n\approx 2n+L$
with $n$ an integer and $L$ the angular momentum. This result is interesting in its own right. It appears to be in agreement both with my recent preprint and my preceding work and with most of the papers that appeared about Yang-Mills theory in 2+1 dimensions. Indeed, they get this spectrum from the zeros of Bessel functions, with the cut-off setting the scale. Very simple and very nice.
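For readers who want to see where a scaling of this kind comes from, here is a rough sketch. In the hard-wall model the masses are fixed by zeros of Bessel functions, and the large-order asymptotics of those zeros is linear in both indices; the identification of the Bessel order with $L$ below is schematic, so take this as an illustration rather than the paper's derivation:

```latex
% Hard-wall masses proportional to Bessel-function zeros \beta_{L,n},
% with the hard-wall cut-off setting the overall scale \Lambda:
m_{n,L} \;\propto\; \beta_{L,n}\,\Lambda ,
% McMahon's large-zero expansion gives
\beta_{L,n} \;\approx\; \Big(n + \tfrac{L}{2} - \tfrac{1}{4}\Big)\pi ,
% so that, up to an overall factor and an additive constant,
m_{n,L} \;\sim\; 2n + L .
```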
I should say that today common wisdom prefers to consider Regge trajectories, hadron spectroscopy being in agreement with them, but, as glueballs are not yet identified unequivocally, I am not quite sure that the situation between the soft-wall and hard-wall models is so well defined. Of course, this is a situation where experiments can decide, and surely it is just a matter of time.
14/01/2009
My point of view about this question, as readers of the blog may know, is that a general technique for strong coupling problems should be preferred to more targeted approaches. This by no means diminishes the value of these works. Another point I have discussed about the spectrum of AdS/QCD is what happens if one takes the lower state at about 1.19: does one recover the ground state seen in lattice QCD for the glueball spectrum as the next state?
The value of this approach relies on a serious possibility to verify, with a low-energy theory, a higher-level concept connecting gravity and gauge theories. Both sides have something to gain.
## An inspiring paper
24/10/2008
These days I am stuck at home due to the effects of flu. When the bad symptoms started to relax I was able to think about physics again. So, reading the daily listing from arxiv today, I uncovered a truly inspiring paper from Antal Jakovac and Daniel Nogradi (see here). This paper treats a very interesting problem about the quark-gluon plasma. This state was observed at RHIC at Brookhaven. Successful hydrodynamical models make it possible to obtain values of physical quantities, like shear viscosity, that could in principle be computed from QCD. The importance of shear viscosity relies on the existence of an important prediction from AdS/CFT symmetry claiming that the ratio between this quantity and the entropy density is at least $1/4\pi$. If this lower bound proved true, we would get an important experimental verification of the AdS/CFT conjecture.
Jakovac and Nogradi carry out the computation of this ratio for SU(N) Yang-Mills theory. Their approach is quite successful as they are able to show that the value they obtain is still consistent with the lower bound, although they have serious difficulties evaluating the error. But what really matters here is the procedure these authors adopt to reach their aim, making this a quite simple avenue to pursue once the solution of Yang-Mills theory in the infrared is acquired. The central point is again the gluon propagator. These authors simply assume the very existence of a mass gap, taking for the propagator something like $e^{-\sigma\tau}$ in Euclidean time. Of course, $\sigma$ is the glueball mass. This is a too simplified assumption as we know that the gluon propagator is somewhat more complicated and a full spectrum of glueballs does exist that can contribute to this computation (see my post and my paper).
So, I spent my day extending the computations of these authors to a more realistic gluon propagator. Indeed, with my gluon propagator there is no need for one-loop computations, as the identity at 0-loop $G_T=G_0$ no longer holds true for a non-trivial spectrum and one immediately has an expression for the shear viscosity. I hope to give some more results in the near future.
https://www.forwardingplane.net/configuration-archive/enable-telnet-access-on-macos-high-sierra/ | # Enable Telnet access on MacOS High Sierra
Lots of things changed under the hood in MacOS High Sierra. One of those was to enable a sandbox-like environment and to remove insecure communication protocols. This breaks things like console communication to the network modeling and virtualization platform Eve-NG. It's fairly trivial to re-enable it, however. This can be accomplished by doing the following steps.
Install Homebrew, then open your favorite terminal application (I like to use iTerm2, also installable via Homebrew, but Terminal works fine).
1. Reboot your Mac and hold the CMD + R keys
2. When presented with the recovery options, click Utilities at the top and choose Terminal
3. Type: csrutil disable
4. Reboot as normal
5. Open a terminal and type:
6. brew install telnet
   sudo ln -s /usr/local/bin/telnet /usr/bin/telnet
7. Again, reboot your Mac and hold the CMD + R keys
8. When presented with the recovery options, click Utilities at the top and choose Terminal
9. Type: csrutil enable
10. Reboot as normal
You’re done. You can have telnet for your internal communication to Eve-NG consoles. Don’t use it to talk to production network gear, because it’s not 1998. | 2019-11-21 14:32:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3192119896411896, "perplexity": 9549.13945941639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670821.55/warc/CC-MAIN-20191121125509-20191121153509-00462.warc.gz"} |
http://tex.stackexchange.com/questions/180046/filling-the-area-under-a-graph-based-on-intersection-with-another-graph | Filling the area under a graph based on intersection with another graph
I'm trying to produce this:
But I can't get any further than this:
This is my code:
\documentclass{minimal}
\usepackage{pgfplots}
\usetikzlibrary{intersections}
\pgfmathdeclarefunction{normal}{2}{%
\pgfmathparse{1/(#2*sqrt(2*pi))*exp(-((x-#1)^2)/(2*#2^2))}%
}
\makeatletter
\pgfmathdeclarefunction{erf}{1}{%
\begingroup
\pgfmathparse{#1 > 0 ? 1 : -1}%
\edef\sign{\pgfmathresult}%
\pgfmathparse{abs(#1)}%
\edef\x{\pgfmathresult}%
\pgfmathparse{1/(1+0.3275911*\x)}%
\edef\t{\pgfmathresult}%
\pgfmathparse{%
1 - (((((1.061405429*\t -1.453152027)*\t) + 1.421413741)*\t
-0.284496736)*\t + 0.254829592)*\t*exp(-(\x*\x))}%
\edef\y{\pgfmathresult}%
\pgfmathparse{(\sign)*\y}%
\pgfmath@smuggleone\pgfmathresult%
\endgroup
}
\makeatother
\pgfmathdeclarefunction{skew}{3}{%
\pgfmathparse{(exp(-((x-#1)^2)/(2*(#2)^2))*((erf((#3*(x-#1))/(sqrt(2)*#2)))+1))/(sqrt(2*pi)*#2)}%
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
hide y axis,
axis lines*=center,
axis on top,
no markers,
domain=-1:24,
samples=100,
xlabel=\empty,
ylabel=\empty,
every axis x label/.style={at=(current axis.right of origin),anchor=west},
every axis y label/.style={at=(current axis.above origin),anchor=south},
height=5cm, width=12cm,
xmin = -1, xmax=24,
xtick=\empty, ytick=\empty,
enlargelimits=false,
clip=false
]
\addplot [name path=normal,very thick,cyan!85!black!50] {normal(14,3.416969)};
\addplot [name path=skew,very thick,red!85!black!50] {skew(1,4,10)};
\addplot [draw=green!70!black!20,very thick,fill=green!15!white!15,domain=-2:24] {min(normal(14,3.41696),skew(1,4,10))} \closedcycle;
\draw [red, thick, name intersections={of={normal and skew}}] ({rel axis cs:0,0}-|intersection-1) -- ({rel axis cs:0,1}-|intersection-1);
\end{axis}
\end{tikzpicture}
\end{document}
I have checked Fill the area determined by two pgfplots graphs, but this example just uses a preset number for the domain of the area to be filled, while I intend to use the intersection (which I can't fill in under domain, e.g. domain:intersection-1:12 doesn't work so well).
-
Using the fillbetween library that was introduced in PGFPlots 1.10:
\documentclass{article}
\usepackage{pgfplots}
\usepgfplotslibrary{fillbetween}
\usetikzlibrary{intersections}
\pgfmathdeclarefunction{normal}{2}{%
\pgfmathparse{1/(#2*sqrt(2*pi))*exp(-((x-#1)^2)/(2*#2^2))}%
}
\makeatletter
\pgfmathdeclarefunction{erf}{1}{%
\begingroup
\pgfmathparse{#1 > 0 ? 1 : -1}%
\edef\sign{\pgfmathresult}%
\pgfmathparse{abs(#1)}%
\edef\x{\pgfmathresult}%
\pgfmathparse{1/(1+0.3275911*\x)}%
\edef\t{\pgfmathresult}%
\pgfmathparse{%
1 - (((((1.061405429*\t -1.453152027)*\t) + 1.421413741)*\t
-0.284496736)*\t + 0.254829592)*\t*exp(-(\x*\x))}%
\edef\y{\pgfmathresult}%
\pgfmathparse{(\sign)*\y}%
\pgfmath@smuggleone\pgfmathresult%
\endgroup
}
\makeatother
\pgfmathdeclarefunction{skew}{3}{%
\pgfmathparse{(exp(-((x-#1)^2)/(2*(#2)^2))*((erf((#3*(x-#1))/(sqrt(2)*#2)))+1))/(sqrt(2*pi)*#2)}%
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
hide y axis,
axis lines*=center,
axis on top,
no markers,
domain=-1:24,
samples=20,
xlabel=\empty,
ylabel=\empty,
every axis x label/.style={at=(current axis.right of origin),anchor=west},
every axis y label/.style={at=(current axis.above origin),anchor=south},
height=5cm, width=12cm,
xmin = -1, xmax=24,
xtick=\empty, ytick=\empty,
enlargelimits=false,
clip=false
]
\addplot [name path=normal,very thick,cyan!85!black!50] {normal(14,3.416969)};
\addplot [name path=skew,very thick,red!85!black!50] {skew(1,4,10)};
\path [name path=lower, name intersections={of=skew and normal}, intersection segments={of=skew and normal,sequence=B1 -- A2}];
\path[name path=axis]
(axis cs:\pgfkeysvalueof{/pgfplots/xmin},0) --
(axis cs:\pgfkeysvalueof{/pgfplots/xmax},0);
\addplot [yellow] fill between [of=lower and axis, soft clip={(intersection-2) rectangle (axis cs:\pgfkeysvalueof{/pgfplots/xmax},0)}];
\end{axis}
\end{tikzpicture}
\end{document}
-
This worked great! For a while... the answer seems to be very sensitive to the values I chose for the graphs (i.e. sometimes it will fill a rectangle towards the actual xmin coordinates or xmax coordinates (in this case (-1,0) or (24,0)) of the axis, instead of directly downward at the intersection, especially when the intersection is between the value of 6.8-7.8). I suspect the intersection sequence to be the culprit. Maybe there's a way to calculate the intersection count dynamically? As in, an intersection in a certain x-interval? – 1010011010 May 24 at 10:13
I've asked a followup question at tex.stackexchange.com/questions/180127/…. – 1010011010 May 24 at 12:52
Just for fun. This answer had been prepared a long time ago.
\documentclass[pstricks,border=12pt]{standalone}
% Define a new style
\newpsstyle{region}
{
    fillstyle=solid,
    fillcolor=gray!30, % the original fill settings were lost; these are plausible values
}
% Set some keys globally
\psset
{
algebraic,
saveNodeCoors,
NodeCoorPrefix=n,
PointName=none,
PointSymbol=none,
plotpoints=150,
}
% Define a function to plot
\def\f{x^4-4*x^2+3}
\begin{document}
\begin{pspicture}(-2.5,-1)(2.5,4)
% Determine intersection points
\pstInterFF{\f}{0}{-2}{A}
\pstInterFF{\f}{0}{2}{B}
% Fill the bounded regions
\pscustom[style=region]
{\psplot{nAx}{nBx}{\f}\psline(!nBx 0)(!nAx 0)}
% Plot the curve
\psplot{-2}{2}{\f}
% Draw the coordinate axes
\psaxes[ticksize=-2pt 2pt,labelFontSize=\scriptscriptstyle]
{->}(0,0)(-2.5,-1)(2.5,3.75)[$x$,-90][$y$,90]
% Draw the intersection points
\psdots[linecolor=red](A)(B)
% Put a label
\uput[45](*0 {\f}){\scriptsize$y=x^4-4x^2+3$}
\end{pspicture}
\end{document}
- | 2014-11-26 03:21:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8638413548469543, "perplexity": 13137.738136765089}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931005028.11/warc/CC-MAIN-20141125155645-00060-ip-10-235-23-156.ec2.internal.warc.gz"} |
http://clay6.com/qa/24270/acid-rain-contains | # Acid rain contains
$\begin{array}{ll}(a)\;HCl&(b)\;HNO_3\\(c)\;H_2SO_4&(d)\;HNO_3+H_2SO_4\end{array}$
Sulfuric acid ( $H_2SO_4$ ), nitric acid ( $HNO_3$ ), and carbonic acid ( $H_2CO_3$ ) are the major components of acid rain.
Acid rain contains $HNO_3+H_2SO_4$
Hence (d) is the correct answer.
http://sciforums.com/threads/nearly-infallible.159710/page-2#post-3468842 | # "Nearly Infallible"
Discussion in 'Linguistics' started by Kittamaru, Jul 26, 2017.
1. ### The God (Valued Senior Member)
I liked this approaching aspect. Good one. We all knew, even kittamaru understood the point, but none could express it so mathematically clear.
3. ### Kittamaru (Valued Senior Member)
MR said that "nearly infallible" is impossible - my point is to determine if that is accurate (as he claims) or not (as I claim).
Which is what I had thought as well.
Lol, fair enough! T'would seem, then, that its usage determines the criteria for correctness. The particular usage that spawned this thread was:
and
It appears to be, at least in my understanding, an attempt to re-define words to fit the desired narrative.
Hm, a fair enough assessment, if a bit more black and white than shades of grey, but I like it
5. ### sculptor (Valued Senior Member)
nearly infallible
if infallible = perfect
then
I am a shooter
I hit at least 98% of my targets
I ain't perfect(infallible)
but I do come close
"nearly infallible"
seems easier to think of than "almost perfect"
(the army went with "expert"--------) (the meaning of which remained unexplained)
ok
so
I'll go with "My marksmanship is nearly infallible" with no regrets
----------------------------
ain't no black and white, all is/are grey scales.
7. ### sideshowbob (Valued Senior Member)
One is one. It's either one or not. But there's nothing wrong with nearly one.
8. ### Baldeee (Valued Senior Member)
Full is full.
It's either full or not.
Can a glass ever be "nearly full"?
Oh, the ridiculousness of it!
9. ### Randwolf (Valued Senior Member)
Only the gravest of matters are pondered on Sci.
Oh, the profundity of it all!
The humanity...
10. ### Michael 345 (Valued Senior Member)
I can go with nearly full
If say a container holds 100 litres and there are 99.99 litres in the container I have no problem accepting it as nearly full
It only suggests a closeness to a full state
Something can be almost impossible which stands for
with a bit more effort it will be possible
suggesting it has been done but very infrequently
However it does not work in reverse so to speak
Almost possible seems to indicate a knowledge of how it done which begs the question well why hasn't it been done?
Damn I broke my own rule I was only going to make one post here
New rule
Will only make 2 post here
11. ### sideshowbob (Valued Senior Member)
Can you ever be near New York City?
Of course the glass can be nearly full - not full but near to full.
12. ### The God (Valued Senior Member)
Let's change a bit, can the qualifier "nearly" be used with infallible?
It's like "nearly genius" or "nearly honest" if used to describe a person. Appears quite improper.
If at all it is ok to use qualifier here, then should it not be "almost infallible"?
13. ### exchemist (Valued Senior Member)
What I think we are all struggling to articulate is that some adjectives are "non-comparable". Infallible and unique are examples. That means you cannot have "more" infallible or unique, or "very" infallible or unique.
But you can perfectly well have "nearly" infallible or unique, as that is not a comparative construction. It is simply that the thing in question is very close to being in a state such that the term can be applied, or that the inaccuracy implied in applying it is small.
14. ### The God (Valued Senior Member)
I do not know why baldee feels that nearly full is a problem, full represents quantity here. We can say nearly full or almost full. It's like nearly empty or almost empty....much more prevalent than nearly infallible.
15. ### sideshowbob (Valued Senior Member)
"Nearly genius" is one IQ point below genius. "Nearly honest" is occasionally dishonest.
16. ### Dywyddyr (Valued Senior Member)
It's a nearly interesting argument...
17. ### Baldeee (Valued Senior Member)
Baldeee doesn't have a problem with it.
You seem to be missing the
that was included to signify (at least I hoped it would) that it was clearly intended as sarcasm.
Last edited: Jul 27, 2017
18. ### The God (Valued Senior Member)
Oh, he is nearly honest, it is just that he was caught bribing a policeman!
19. ### Michael 345 (Valued Senior Member)
Near genius OK ✓
Near honest NO X
Best near honest is ~ approx little dishonest
New rule only 3 post here
20. ### sculptor (Valued Senior Member)
I tried that once circa 45 years ago when I handed the policeman my drivers licence with a $100 bill attached (for speeding---my 3rd that year---at the time 3 speeding tickets within 1 year = lose your drivers licence). He didn't want it and returned the $100, while blocking the view of his partner, and saying: "Put that away and don't try that again". So, I went to court and gave $50 to a lawyer who whispered something to the judge, who admonished me to slow down and dismissed the case. If the time I spent going to court was worth $50, then I came out even.
I did not think offering the cop the $100 was dishonest, just expeditious.
EDIT - fixed LaTeX formatting. -Kittamaru
Last edited by a moderator: Jul 28, 2017
21. ### sideshowbob (Valued Senior Member)
Honesty is necessarily approximate. There is no way to accurately measure it, so "nearly honest" is a valid approximation.
22. ### StrangerInAStrangeLand (Valued Senior Member)
She nearly fell in love with him.
<>
That does not sound right but I hate the word almost. | 2023-03-25 02:36:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40129250288009644, "perplexity": 7690.7793845934875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00371.warc.gz"} |
https://codereview.stackexchange.com/questions/142444/implementing-an-adjacency-list-in-c | # Implementing an Adjacency List in C
I haven't implemented any sort of graph in C thus far and decided to give it a go by implementing an adjacency list. Is there anything in my code that you see that I can improve, and is there any basic functionality missing from my adjacency list that should be added?
Graph.h
#ifndef GRAPH_H_INCLUDED
#define GRAPH_H_INCLUDED
typedef enum {UNDIRECTED = 0, DIRECTED} Edge_t;

typedef struct AdjListNode AdjListNode_t;
typedef struct AdjList AdjList_t;

typedef struct Graph
{
    Edge_t typeOfGraph;
    int totalVertices;
    int totalEdges;
    AdjList_t *adjLists;
}Graph_t;

AdjListNode_t *createAdjacentNode(int);
Graph_t *createGraph(int, Edge_t);
void deleteGraph(Graph_t *);
void addEdge(Graph_t *, int, int);
int doesEdgeExist(Graph_t *, int, int);
int calculateNumOfEdges(int, Edge_t);
void displayGraph(Graph_t *);
#endif
Graph.c
#include <stdio.h>
#include <stdlib.h>
#include "Graph.h"
struct AdjListNode
{
    int vertex;
    struct AdjListNode *next;
};

struct AdjList
{
    AdjListNode_t *head;
    int totalMembers;
};
AdjListNode_t *createAdjacentNode(int vertex)
{
    AdjListNode_t *newNode = (AdjListNode_t *)malloc(sizeof(AdjListNode_t));
    if(newNode == NULL)
{
return NULL;
}
newNode->vertex = vertex;
newNode->next = NULL;
return newNode;
}
Graph_t *createGraph(int totalVertices, Edge_t typeOfGraph)
{
int i;
Graph_t *graph = (Graph_t *)malloc(sizeof(Graph_t));
if(graph == NULL)
{
return NULL;
}
graph->totalVertices = totalVertices;
graph->totalEdges = 0;
    graph->typeOfGraph = typeOfGraph;
    graph->adjLists = (AdjList_t *)malloc(totalVertices * sizeof(AdjList_t));
    if(graph->adjLists == NULL)
    {
        free(graph);
        return NULL;
    }
    for(i = 0; i < totalVertices; i++)
    {
        graph->adjLists[i].head = NULL;
        graph->adjLists[i].totalMembers = 0;
    }
return graph;
}
void deleteGraph(Graph_t *graph)
{
if(graph != NULL)
    {
        if(graph->adjLists != NULL)
        {
            int vertex;
            for(vertex = 0; vertex < graph->totalVertices; vertex++)
            {
                AdjListNode_t *listIterator = graph->adjLists[vertex].head;
                while(listIterator != NULL)
                {
                    AdjListNode_t *temp = listIterator;
                    listIterator = listIterator->next;
                    free(temp);
                }
            }
            free(graph->adjLists);
        }
free(graph);
}
}
void addEdge(Graph_t *graph, int src, int dest)
{
if((src >= graph->totalVertices || src < 0) || (dest >= graph->totalVertices || dest < 0))
return;
if(doesEdgeExist(graph, src, dest))
return;
    AdjListNode_t *newNode = createAdjacentNode(dest);
    if(newNode != NULL)
    {
        newNode->next = graph->adjLists[src].head;
        graph->adjLists[src].head = newNode;
        graph->adjLists[src].totalMembers++;
        graph->totalEdges++;
        if(graph->typeOfGraph == UNDIRECTED)
        {
            newNode = createAdjacentNode(src);
            if(newNode != NULL)
            {
                newNode->next = graph->adjLists[dest].head;
                graph->adjLists[dest].head = newNode;
                graph->adjLists[dest].totalMembers++;
                graph->totalEdges++;
            }
        }
    }
}
int doesEdgeExist(Graph_t *graph, int src, int dest)
{
    AdjListNode_t *srcVertexPtr = graph->adjLists[src].head;
    while(srcVertexPtr != NULL)
{
if(srcVertexPtr->vertex == dest)
{
return 1;
}
else
srcVertexPtr = srcVertexPtr->next;
}
return 0;
}
int calculateNumOfEdges(int totalNumberOfEdges, Edge_t typeOfGraph)
{
/*
I'm assuming the graph has no self loops or multi-edges.
*/
if(typeOfGraph == UNDIRECTED)
    {
        return totalNumberOfEdges/2;
    }
    else
        return totalNumberOfEdges;
}
void displayGraph(Graph_t *graph)
{
int vertex;
for(vertex = 0; vertex < graph->totalVertices; vertex++)
{
        AdjListNode_t *listIterator = graph->adjLists[vertex].head;
        printf("Vertex %d is adjacent to ", vertex);
        while(listIterator != NULL)
{
printf("%d->", listIterator->vertex);
listIterator = listIterator->next;
}
printf("NULL\n");
}
}
Main.c
#include <stdio.h>
#include <stdlib.h>
#include "Graph.h"
int main()
{
Graph_t *uGraph = createGraph(5, UNDIRECTED);
if(uGraph == NULL)
{
        printf("Could not allocate any memory. Terminating program. ");
exit(EXIT_FAILURE);
}
    /* Example edges (illustrative; the original edge list was lost). */
    addEdge(uGraph, 0, 1);
    addEdge(uGraph, 0, 4);
    addEdge(uGraph, 1, 2);
    addEdge(uGraph, 1, 3);
    addEdge(uGraph, 1, 4);
    addEdge(uGraph, 2, 3);
    addEdge(uGraph, 3, 4);

    printf("Undirected Graph\n\n");
displayGraph(uGraph);
    printf("\nThe total number of edges in this graph is %d\n",
calculateNumOfEdges(uGraph->totalEdges, uGraph->typeOfGraph));
deleteGraph(uGraph);
return 0;
}
Output
Your code is pretty clean, but it's unclear what you imagine it being used for, except possibly just to illustrate adjacency lists. If you're looking to build something actually useful, then I suggest coming up with a few model applications, and thinking about what you need to provide to serve those applications efficiently and easily. For example, how would you model a shortest-path problem with your graph, and how would you implement Dijkstra's algorithm to solve that problem? Or how would you compute (and represent) a minimal spanning tree?
### graph.h
• Graph_t.totalVertices and Graph_t.totalEdges cannot meaningfully take negative values, so you should consider declaring them to have unsigned types. If you do change these, however, then you'll need to propagate corresponding changes throughout.
• Your function prototypes do not include parameter names. Although these are optional, including them makes the header easier to understand, and it may help the compiler produce more meaningful diagnostic messages.
• You declare function createAdjacentNode(int), returning AdjListNode_t *. There's nothing inherently wrong with that, but AdjListNode_t is an opaque type, and you've no other function or other data type declared that uses AdjListNode_t, so what use does anyone who cannot see the definition of AdjListNode_t have for createAdjacentNode(int)? It looks like that function wants instead to have internal linkage in graph.c (and be omitted from the header).
• public function calculateNumOfEdges() sticks out as an oddball on account of accepting as its arguments members (presumably) of a Graph_t, instead of accepting a Graph_t *. Your API is mixing different levels of abstraction.
• As a matter of style (only), I recommend against spelling names in camel case in C code.
• The suffix _t for type names is reserved by POSIX. That perhaps can be ignored if you don't intend your code to be used in a POSIX environment. In practice, your use of that suffix will probably not present a problem even in a POSIX environment, but as a matter of form, I recommend against using it.
### graph.c
Your implementation code is fairly straightforward, but it does present a few issues:
• do not cast the return value of malloc() in C
• addEdge() fails silently in the event of an allocation error. It should somehow alert the caller if it is unable to add the requested edge.
• When operating on an undirected graph, an allocation failure in the wrong place can make addEdge() add just one (directed) edge instead of the two it normally would add. This failure mode is also silent.
• doesEdgeExist() is a public function, so it ought to perform proper argument checking. As it is, if its first argument is a null pointer, the function will attempt to dereference it, and if its second argument is not a valid vertex number for the provided graph then an out-of-bounds array access will be performed.
• the (documented) assumption by calculateNumOfEdges() that the graph has no multi-edges is reasonable because addEdge() enforces it. If you want to also assume (as you do) that there are no self-edges, then you should enforce that constraint, too.
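To make the silent-failure points concrete, here is one way the insertion path could report errors. This is only a sketch: the node layout, status enum, and function name are assumptions, not part of the reviewed code.

```c
#include <stdlib.h>

/* Sketch only: every allocation failure becomes a status the caller can
   check, instead of a silent no-op. */
typedef struct AdjListNode {
    int vertex;
    struct AdjListNode *next;
} AdjListNode;

typedef enum { GRAPH_OK, GRAPH_ERR_ALLOC } GraphStatus;

static GraphStatus pushAdjacent(AdjListNode **head, int vertex)
{
    AdjListNode *node = malloc(sizeof *node);   /* no cast on malloc in C */
    if (node == NULL)
        return GRAPH_ERR_ALLOC;
    node->vertex = vertex;
    node->next = *head;
    *head = node;
    return GRAPH_OK;
}
```

With this shape, `addEdge()` on an undirected graph can add the first half-edge, and if the second `pushAdjacent()` fails, pop the first one back off before returning `GRAPH_ERR_ALLOC`, so the graph never ends up with a stray one-directional edge.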
### Missing features
You asked about missing features. I can imagine a lot of things that you could provide, but here is a fairly minimal list of additional things I think you should provide:
• Many graph-based algorithms rely on or work with data associated with edges and / or vertices, such as tags and weights. You have no built-in mechanism for associating any such data with either vertices or edges.
• On the other hand, all the adjacency information is opaque to users anyway, so there are not many interesting things one can actually do with the graphs you provide for.
• There is no mechanism for adding or removing vertices.
• There is no mechanism for deleting edges. | 2020-04-04 22:48:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19235536456108093, "perplexity": 3543.9978560757663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370525223.55/warc/CC-MAIN-20200404200523-20200404230523-00325.warc.gz"} |
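For the first missing feature, per-edge data such as weights, one minimal approach is to widen the adjacency node and provide a lookup helper. The type and field names below are illustrative; the reviewed code has no weight support.

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch: an adjacency-list node carrying a weight. */
typedef struct WeightedAdjNode {
    int vertex;
    double weight;
    struct WeightedAdjNode *next;
} WeightedAdjNode;

/* Linear scan for the weight of the edge to `vertex`; returns false when the
   edge is absent, so callers can distinguish "no edge" from weight 0. */
static bool edgeWeight(const WeightedAdjNode *list, int vertex, double *out)
{
    for (; list != NULL; list = list->next) {
        if (list->vertex == vertex) {
            *out = list->weight;
            return true;
        }
    }
    return false;
}
```

A Dijkstra or minimum-spanning-tree implementation could then be written against this node shape without exposing the list internals to callers.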
http://nixpanic.net/2011/01/running-qemu-system-arm-through-libvirt/ | # Running qemu-system-arm through libvirt
Being ill for a couple of days and needing to do something (more or less) productive, I thought of giving Fedora-ARM a go. As I use libvirt and virt-manager for work, running an ARM emulation this way is my preferred setup.
The Fedora Wiki provides a good starting point with a nice HowTo. Unfortunately, it already mentions that qemu-system-arm gets wrong arguments and that a script functioning as a (temporary) workaround should be used. This was a note for Fedora 13; I'm running Fedora 14. So, of course, I tried to start my VM without the script first:
qemu-system-arm: -device lsi,id=scsi0,bus=pci.0,addr=0x5: Bus 'pci.0' not found
Oops. This really does not seem to work out of the box :-( Reading the script reveals that bus=pci.0 gets replaced by bus=pci. Manually starting qemu-system-arm with some adjusted bus= parameters seems to start the VM.
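The core transformation that the workaround script performs, rewriting every `bus=pci.0` argument to `bus=pci` before qemu-system-arm sees it, can be sketched like this. Only the substitution itself comes from the post; the wrapper framing is an assumption.

```shell
# Demonstrate the argument rewrite the Fedora workaround script performs:
# every occurrence of bus=pci.0 becomes bus=pci.
args='-device lsi,id=scsi0,bus=pci.0,addr=0x5'
fixed=$(printf '%s' "$args" | sed 's/bus=pci\.0/bus=pci/g')
printf '%s\n' "$fixed"
# A real wrapper would then hand the rewritten arguments to qemu-system-arm.
```

Running this prints `-device lsi,id=scsi0,bus=pci,addr=0x5`, which is exactly the form qemu-system-arm accepts on the versatileab machine.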
Obviously there are loads of others hitting similar issues. For Fedora 14 bug 667345 was filed against libvirt for PPC emulation.
With help from gdb and by manually executing qemu-system-arm and comparing it with qemu-kvm, it seems that the virtual hardware is configured differently. Most noticeable (for me) is the name of the PCI-bus: on qemu-system-arm it is pci and on qemu-kvm it is pci.0. The function qbus_find_recursive() can be used as a breakpoint to check the names of the available busses (bus->name).
I would assume that at least the busses have the same names for emulated hardware, so either bus=pci or bus=pci.0 should work with any qemu-* command. Unfortunately, it is unclear to me how qemu-kvm constructs the virtual hardware; the qemu-system-* binaries are more transparent. Depending on the machine that is being emulated, the hardware is 'connected' when the machine is created. The virtual hardware is constructed by qemu-kvm/hw/*.c and depends on the type of machine that is created.
Under qemu-kvm/hw/*.c there are some uses of pci_register_bus() where pci is passed as the name for the PCI-bus. It seems easy to rename the PCI-busses to pci.0. This change will break any existing scripts/tools that pass bus=pci on the command line, so the solution is not the best. However, packages are temporarily available in case someone wants to try this solution.
A post to the qemu-devel list will show if a patch for qemu, or rather libvirt is preferred.
Fixing the name of the PCI-bus is not the only thing I had to do before I could start my Fedora-ARM VM, though.
The emulated hardware versatileab does not have an ISA-bus, therefore -device isa-serial cannot work. The Serial device in the Virtual Machine needs to be removed. An alternative would be a usb-serial, but at the moment virt-manager does not offer this option (you can probably configure it with virsh edit or virsh define though).
There are more limitations with the emulated versatileab machine. It obviously cannot cope with the 512MB of RAM I gave it. The result was shown by virt-manager:

libvirtError: operation failed: could not query memory balloon allocation

and in my /var/log/libvirt/fedora.arm.log:

qemu: hardware error: pl011_read: Bad offset 101f1018

Giving the machine only 256MB RAM instead of 512MB RAM seems to make this issue go away too.
http://www.kinberg.net/wordpress/willy/ | Willy Kinberg
D. grandfather Willy Kinberg
He married Edla Elisabeth Charlotta Schenström (called Lizzie) and they had four children, one of whom was my father.
Willy died 18/8 1958 in Uppsala
Lizzy, my grandmother, died 12/1 1970 in Uppsala.
Willy Peterson Kinberg wrote a very ambitious book in German called "Wie entstanden Weltall und Menschheit?" ("How did the universe and mankind come into being?") (1), published in Freiburg in 1906, the year after Albert Einstein published the theory of special relativity and 9 years before the publication of the general relativity theory; see Einstein's field equation below.
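The field equation referred to above, which the original page showed as an image, in its standard modern form (this rendering is supplied here, not taken from the book):

```latex
% Einstein's field equations, with cosmological constant \Lambda
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

Here G_{\mu\nu} is the Einstein tensor, g_{\mu\nu} the metric, and T_{\mu\nu} the stress-energy tensor.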
The book sold 20,000 copies within a few months (according to C. A. Claus, the German-Swedish translator).
Willy decided to translate it to Swedish with the title “Världens och Människans Uppkomst“.
The English title could be “The Creation of the World and Human kind“.
I have the Swedish book that Willy gave to his parents with the signature “Tillägnat Föräldrarna av förf. Freiburg den 28/11 2008” (“Dedicated to the parents from the writer.”) the same year the book was published. This is of course a very special book not only because of its content but also because of the family story it holds.
I have all my life been interested in the subject of Creation. When I got this book in my hands I decided to start the project "The creation 100 years later" and document the main part of it in (https://mygrandfather.wordpress.com/).
Now the latest update of the 100 years later pages are to be found in www.kinberg.net/wordpress/willy/100_years_later/
As I am not so good at reading German in the old font, I decided to read the book in Swedish but to publish scanned pages of the German version here on this site, in the top menu under The Book.
Click on the image below to start reading scanned pages
In memory of a grandfather
https://math.stackexchange.com/questions/2427507/can-we-have-fractal-dimension-more-than-3 | # Can we have fractal dimension more than 3?
The Menger sponge is a 3d fractal; however, its fractal dimension is still less than 3. In fact, most natural objects, like coast lines, have fractal dimensions between 2 and 3. This might be because we are calculating a fractal surface dimension. But how does it make sense to have a 3d fractal (the Menger sponge) and get a value of 2.727?
• This is just a matter of expressing the generating equation in $\mathbb{R}^n$, and I don't see any impediment to formulating them in higher dimensions. Besides, the dimension of the space is one thing, and the dimension metric associated with the set is another. – Brethlosze Sep 13 '17 at 4:35
That would of course depend on the exact definition of fractal dimension, but if we're using the box-counting definition then there's nothing hindering it; you just have to create the fractal in a space of dimension larger than $3$ to start with.
For the box-counting dimension the dimension of the space the fractal is a subset of is a hard limit.
• Indeed, every notion of dimension that I am aware of is monotone, in the sense that if $A \subseteq B$, then $\dim(A) \le \dim(B)$. Also, every notion of dimension that I am aware of has $\dim(\mathbb{R}^n) = n$ (with respect to the usual Euclidean metric). Thus no subspace of $\mathbb{R}^n$ can ever have dimension greater than $n$. – Xander Henderson Sep 14 '17 at 13:54
• " you just have to create it in a dimension larger than 3 to start with." Is there any software for that? – quantised Sep 16 '17 at 6:21
As skyking pointed out, there are various ways to characterize the "dimension" of a set. However, when we are talking about fractals, we are generally trying to connect metric properties with properties of measures. Roughly speaking, a metric is a way of measuring distances between points in an abstract set, while a measure is a way of measuring the "volume" (or length, or area, or content) of a set.
For example, if we shrink a cube by a factor of $\frac{1}{2}$, we are reducing all of the distances by a factor of $\frac{1}{2}$. Thus a cube that is 1 unit on each side shrinks down to a cube that is $\frac{1}{2}$ of a unit on each side. However, the measure of a cube is the product of its length, width, and height. Thus the original cube has a measure (or volume) of 1 unit$^3$, while the smaller cube has a measure of $\frac{1}{2^3}$ units$^3$. In general, if we compare the scaling of a unit cube and the volume of a scaled cube, we get the relation $$\text{volume} = \text{scaling}^3.$$ The exponent on the length tells us about the dimension of a cube; a cube is three-dimensional.
Generalizing this construction to a Menger sponge, suppose that the volume of some Menger sponge is 1. If we shrink the sponge down by a factor of $\frac{1}{3}$, what is the volume? This is a little hard to get at directly, but notice that I can put 20 of the scaled sponges together to get the original sponge. Since each of these 20 little sponges are identical, it is reasonable to assume that they all have the same volume, thus if we let $v$ represent the volume of the little sponges, we have $$1 = 20v \implies v = \frac{1}{20}.$$ But we also know that the volume of the little sponges should behave like side length raised to some power, thus we have $$v = \left( \frac{1}{3} \right)^s.$$ But now we have $$\frac{1}{20} = \left(\frac{1}{3}\right)^s \implies -\log(20) = -s \log(3) \implies s = \frac{-\log(20)}{-\log(3)} = \frac{\log(20)}{\log(3)} \approx 2.7268.$$ Thus the Menger sponge is about 2.7-dimensional.
• But shouldn't the Hausdorff dimension exceed the topological dimension of 3? – quantised Sep 16 '17 at 6:20
• If by "topological dimension" you mean Lebesgue covering dimension, my recollection is that the Lebesgue covering dimension of the sponge is 1. I'm not sure I understand the problem... – Xander Henderson Sep 16 '17 at 13:48
• The sponge occupies space, so its topological dimension is 3; but then its Hausdorff dimension is 2.72 – quantised Sep 16 '17 at 16:44
• @quantised What do you mean that it "occupies space?" It is a subset of $\mathbb{R}^3$, but so is a smooth curve, which is very much one-dimensional. The sponge has zero Lebesgue measure, so in that sense, it does not occupy space. And, again, its Lebesgue covering dimension (the usual notion of topological dimension) is 1. – Xander Henderson Sep 16 '17 at 18:30 | 2019-10-22 14:50:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237648844718933, "perplexity": 268.996050513962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822098.86/warc/CC-MAIN-20191022132135-20191022155635-00518.warc.gz"} |
https://sinoshipnews.com/91teny1/critical-pressure-of-ammonia-8bf904 | # critical pressure of ammonia
The critical temperatures and pressures of several common substances are listed in Table $$\PageIndex{1}$$. Boiling point of ammonia is $-33.35\ \mathrm{^{\circ}C}$ and its critical temperature $132.4\ \mathrm{^{\circ}C}$ is well above room temperature. (Even though a bottle is kept at room temperature, the ammonia will stay in the liquid state due to the high pressure.) Each substance also has a critical pressure ($$P_c$$), the minimum pressure needed to liquefy it at the critical temperature. At ambient temperatures, the vapor pressure of 26° Baumé material just about equals atmospheric pressure. This also suggests that high pressure ammonia systems might be suitable for use. N2H4 (hydrazine) is used to scavenge dissolved oxygen and protect copper alloys from ammonia. Anhydrous ammonia boils at -28° F and freezes to a white crystalline mass at -108° F. When heated above its critical temperature of 270.34° F, anhydrous ammonia exists only as a vapor regardless of the pressure.

Critical state variables of ammonia:

- Critical pressure p_crit: 113.39 bar
- Critical temperature T_crit: 405.4 K (132.25 °C)
- Critical density: 225 kg/m³
- Triple-point pressure p_Tr: 0.0608 bar
- Triple-point temperature: 195.5 K (-77.65 °C)

The critical point of a liquid was discovered by the French physicist Charles Cagniard de la Tour in the year 1822. Using a coaxial cylinder apparatus, the thermal conductivity coefficient of ammonia was measured up to 80 MPa along nine quasi-isotherms in the supercritical regime and one quasi-isotherm below the critical temperature.

In the uhde® dual-pressure process, a once-through medium-pressure synthesis section is connected in series with the conventional high-pressure ammonia synthesis loop. The first plant to apply this process was the SAFCO IV ammonia plant in Al-Jubail, Saudi Arabia, with a capacity of 3,300 mtpd, which started up in 2004. The world relies on ammonia-derived fertilisers for food production, so manufacturing these as efficiently as possible is of critical importance. Ammonia is also a commonly used, cheaper material for maintaining pH in boiler water.
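The kelvin and Celsius figures quoted for the critical point (405.4 K and 132.25 °C) can be checked against each other directly; the function name below is illustrative.

```c
/* Convert a temperature from kelvin to degrees Celsius; used here only to
   check that the quoted K and degrees-C values describe the same point. */
static double kelvin_to_celsius(double t_kelvin)
{
    return t_kelvin - 273.15;
}
```

`kelvin_to_celsius(405.4)` gives 132.25, matching the Celsius figure, and the triple-point pair 195.5 K / -77.65 °C checks out the same way.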
A reduction of 45.6% at -40°F (-40°C) in ammonia gas pressure will initiate a critical or sonic flow. Aqua ammonia should be stored in a closed container and kept cool; otherwise, the ammonia gas will come out of solution and the material strength will be reduced. With ammonia this offers several opportunities to capitalise on the high critical temperature and high index of compression of the refrigerant in order to deliver heat efficiently in the temperature range 70 °C to 90 °C. As the critical temperature is approached, the properties of the gas and liquid phases become the same, resulting in only one phase. The combination of critical temperature and critical pressure is called the critical point of a substance. The critical pressure is the vapor pressure at the critical temperature, and the critical density is the resulting density at these conditions. The viscosity values are in µP (multiply by 0.1 to obtain µPa·s).

Molecular weight vs. volume percent (%) plot.

Male rats gavaged with 1000 µmol (15)N-ammonium chloride each day for 5 days excreted low but significant amounts of excess (15)N-NO3- in urine on the 5 days of treatment and on the 5 subsequent days. An in vitro chemical model system was used to demonstrate that oxidation of ammonia to NO3- by the hydroxyl radical at physiological pH is chemically feasible.

Solution (by Examveda Team): a refrigerant with the highest critical pressure is ammonia. Question: the gas constant of ammonia is 488.2 J/(kg·K); use the Z-chart below to determine the specific volume (m³/kg) for ammonia at T = 182 °C and P = 60 bar. Since then, three more plants based on the uhde® dual-pressure process have come on stream with similar capacities.
Action of the absorption ammonia refrigerating machine: in the generator the aqueous ammonia solution, usually of about 30 per cent molal concentration, is heated by steam to the boiling point under the generator pressure, causing the formation of ammonia vapor.

Properties of ammonia:

- Formula: H3N (NH3)
- Formula mass: 17.03
- Melting point: -77 °C
- Boiling point: -33 °C
- Decomposition point: 500 °C
- Vapor pressure: 7510 mm Hg (25 °C)
- Odor threshold: 0.0266 mg/m³

Ammonia is a gas that is brought to a liquid state during the process through increased pressure at ambient temperature.

2.4.5 Subcritical boiler and supercritical boiler

Phase Separation and Critical Phenomena in 18 (n-Alkane + Ammonia) and 4 (n-Alkane + Methanol) Mixtures, J. Chem. Thermodyn. International Journal of Refrigeration 2009, 32 (8), 1897-1913.
Exerts a vapor pressure of 26° Baumé material just about equals atmospheric pressure Melting Line to K! Ambient temperature understand that ammonia producers are under increasing pressure to reduce production costs, improve energy efficiency and production! Is approached, the vapor pressure which increases with rising temperature ® dual-pressure process have come on with. Calculation of thermodynamic state variables: p crit food production, so manufacturing these as as... Even though the bottle is kept at room temperature for ever by the French physicist Charles Cagniard la! Ph in boiler Water points, liquid anhydrous ammonia exerts a vapor pressure which increases with rising temperature मà¥à¤¨à¤¿à¤¯à¤¾ in. Critical pressure is the vapor pressure of 26° Baumé material just about equals pressure. Highest critical pressure is the vapor pressure of 26° Baumé material just about equals atmospheric pressure during process... Water and ammonia are miscible in all proportions boiler Water 225 [ kg / m ]. Is kept at room temperature for ever stream with similar capacities material to maintain pH boiler. And supercritical regions same, resulting in only one phase % ) plot and store it at room for... And liquid phases become the same, resulting in only one phase pressure p Tr cheaper to... Pts ) for ammonia, the vapor pressure of a liquid was by... Can liquify the ammonia will be gas regardless of the pressure n 2 H 4 is used scavenge... Gas regardless of the gas and liquid phases become the same, resulting in only phase! Derived fertilisers for food production, so manufacturing these as efficiently as possible is of critical temperature measurement of by! Thermal analysis, Thermochim can occur during professional or personal use, 0.08... critical state of... Highest critical pressure and temperature correlations require that either the specific gravity of the and. These as efficiently as possible is of critical temperature ammonia will be gas regardless of the gas the! 
Based on the uhde ® dual-pressure process have come on stream with similar capacities hentze, G., temperature. Thermodynamic and gas laws clearly describe rapid gas-pressure reductions and their effects rising.... Ammonia is commonly used cheaper material to maintain pH in boiler Water the gas or the full be. Used cheaper material to maintain pH in boiler Water question: question (... State during the process through increased pressure at the critical point of a substance the! Of harmful ammonia exposure to humans can occur during professional or personal use more plants on! - Offering ammonia gas, NH3, ठमà¥à¤¨à¤¿à¤¯à¤¾ à¤à¥à¤¸ in Ghaziabad Uttar! Energy efficiency and increase production capacity whilst reducing emissions regardless of the or. Ammonia for temperatures from the Melting Line to 725 K and pressures of several common substances are in! ( % ) plot feed to urea reactor should be in liquid state due to pressure! Cagniard de la Tour in the critical temperature used to scavenge Dissolved Oxygen protect. To a liquid state rapid gas expansion - UN1005 - 7664-41-7 vs volume percent ( % ) plot use! Stream with similar capacities with rapid gas expansion pressures of several common substances are in., NH3, ठमà¥à¤¨à¤¿à¤¯à¤¾ à¤à¥à¤¸ in Ghaziabad, Uttar Pradesh, and store it at room temperature ever. As efficiently as possible is of critical importance a vapor pressure of 26° Baumé material just about atmospheric. For ammonia, the properties of ammonia for temperatures from the Melting Line to 725 K and pressures several! As the critical point of a liquid state ( 3 ),.... À¤À¥À¤¸ in Ghaziabad, Uttar Pradesh of a fluid at the critical temperature synthesis. On the boiling curve pressures to 1000 MPa bottle, and store it room..., and store it at room temperature, the critical temperature and critical points, liquid ammonia! ) and 4 ( 8 Pts ) for ammonia, the critical and supercritical.... 
Occur during professional or personal use to a liquid state due to high pressure clearly describe rapid gas-pressure reductions their! Synthesis loop for food production, so manufacturing these as efficiently as possible is of critical importance several common are... Our ammonia feed to urea reactor should be in liquid state due to high pressure ammonia might... ) R-12 C ) R-22 d ) ammonia ) R-22 d ).. Of harmful ammonia exposure to humans can occur during professional or personal use solution a! Variables of ammonia on the uhde ® dual-pressure process have come on stream with similar capacities 2 H 4 used! Stay in liquid state can liquify the ammonia will be gas regardless of the pressure liquids. A ) R-11 b ) critical pressure of ammonia C ) R-22 d ) ammonia it into bottle. Also suggests that high pressure but as a principle our ammonia feed to urea reactor should be in state! Costs, improve energy efficiency and increase production capacity whilst reducing emissions systems generally are associated with gas. Phases do not exist are under increasing pressure to reduce production costs, improve energy efficiency and increase production whilst! Are under increasing pressure to reduce production costs, improve energy efficiency and increase production capacity whilst reducing.... Vs volume percent ( % ) plot temperature and critical pressure and temperature correlations require either! 225 [ kg / m 3 ] tripelpoint pressure p Tr year.. Thermal Power Plant, 2015 during the process through increased pressure at the critical pressure is the vapor of... At ambient temperature are 112.8 Bar and 406 K, Respectively on stream with similar capacities initiate... Highest critical pressure and temperature are 112.8 Bar and 406 K, Respectively ammonia feed urea... N 2 H 4 is used to scavenge Dissolved Oxygen and protect Copper alloys from ammonia and Copper... 
An aqua ammonia solution has a vapor pressure which increases with rising temperature, and liquid anhydrous ammonia likewise exerts a vapor pressure which varies with temperature; at its atmospheric boiling point of about -33 °C that vapor pressure just about equals atmospheric pressure. The critical pressure is the vapor pressure of a fluid at its critical temperature; for ammonia the critical pressure and critical temperature are 112.8 bar and 406 K, respectively. Above the critical point the properties of the gas and liquid phases become the same regardless of the pressure, and the fluid is supercritical; the critical state of a substance was first observed by the French physicist Charles Cagniard de la Tour in 1822. Reducing the downstream pressure below a critical fraction of the upstream pressure will initiate a critical (sonic) flow, and the gas laws clearly describe such rapid gas-pressure reductions and their effects. Thermodynamic state variables of ammonia have been tabulated for temperatures from the melting line to 725 K and pressures to 1000 MPa (lower limit for calculation: -75 °C, 0.08 bar; see J. Chem. Thermodynamics 1988, 20 (3), 273–297, and, on critical temperature measurement of liquids by means of differential thermal analysis, Thermochim. Acta); thermodynamic property correlations require that either the specific gravity of the gas or the full composition be known. In water treatment, hydrazine (N2H4) is used to scavenge dissolved oxygen and protect copper alloys from ammonia attack. Ammonia producers are under increasing pressure to reduce production costs, improve energy efficiency and increase production capacity whilst reducing emissions: the world relies on ammonia-derived fertilisers for food production, so manufacturing these as efficiently as possible is of critical importance, and new plants based on the dual-pressure process have come on stream with capacities similar to those of the conventional high-pressure ammonia synthesis loop.
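Using the critical constants quoted in the text (112.8 bar and 406 K), reduced state variables and the supercritical test can be computed directly; a minimal Python sketch, where the function names are illustrative:

```python
# Critical constants for ammonia as quoted in the text (assumed values):
T_CRIT_K = 406.0      # critical temperature, K
P_CRIT_BAR = 112.8    # critical pressure, bar

def reduced_state(T_kelvin, p_bar):
    """Return the reduced temperature and pressure (T/Tc, p/pc)."""
    return T_kelvin / T_CRIT_K, p_bar / P_CRIT_BAR

def is_supercritical(T_kelvin, p_bar):
    """Above both the critical temperature and the critical pressure,
    the gas and liquid phases are no longer distinct."""
    Tr, pr = reduced_state(T_kelvin, p_bar)
    return Tr > 1.0 and pr > 1.0

print(reduced_state(300.0, 10.0))      # roughly (0.739, 0.089)
print(is_supercritical(450.0, 150.0))  # beyond the critical point
```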
https://torxakis.org/userdocs/stable/grammar/Synchronized_Operator.html | # Synchronized Operator
## Semantics
process1 || process2
process1 and process2 are executed while synchronized over all channels, including EXIT.
When both processes have exited then the synchronized composition exits.
## Examples
### Communication
The statement
Channel1_Int ? x
||
Channel1_Int ? y
describes the process with two sub processes that are synchronized over all channels.
Since both sub processes want to communicate over Channel1_Int, communication is possible.
After that communication has occurred x and y contain the same value, and
since both sub processes have exited, the synchronized composition exits.
### Constrained communication
The statement
Channel1_Int ? x [[ x >= 10 ]]
||
Channel1_Int ? y [[ y <= 20 ]]
describes the process with two sub processes that are synchronized over all channels.
Since both sub processes want to communicate over Channel1_Int and
both constraints can be simultaneously satisfied, communication is possible.
After that communication has occurred x and y contain the same value in the range [10,20]
and, since both sub processes have exited, the synchronized composition exits.
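By contrast (a hypothetical example, following the same syntax as the examples above), the statement

Channel1_Int ? x [[ x >= 30 ]]
||
Channel1_Int ? y [[ y <= 20 ]]

describes two sub processes whose constraints cannot be satisfied simultaneously: no value is both at least 30 and at most 20, so no communication over Channel1_Int is possible and the synchronized composition cannot proceed.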
Channel1_Int ? x | 2022-05-28 01:29:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28452420234680176, "perplexity": 10107.37240829844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663011588.83/warc/CC-MAIN-20220528000300-20220528030300-00785.warc.gz"} |
http://ptrehzachodniopomorski.pl/john-humble-oexui/hall-effect-derivation-b42f8d | The Hall effect was discovered by the American physicist Edwin Herbert Hall in 1879. When a current-carrying conductor is placed in a magnetic field perpendicular to the current, a voltage develops across the conductor transverse to both the current and the field. This voltage is called the Hall voltage, the corresponding field the Hall field, and its value is found to depend on the magnetic field strength, the nature of the material and the applied current.

To derive it, consider a slab of conductor of length l, width d and thickness t, through which an electric current i is supplied along the x-axis, placed in a magnetic field B along the z-axis. In a conductor the current consists of the movement of many small charge carriers, typically electrons, holes, ions, or all three. Each electron drifting with velocity v experiences a force Bev directed from the top surface towards the bottom of the sample, so electrons accumulate on the bottom surface, which becomes negatively charged while the top surface is charged positively. This sets up a Hall field $E_H$ along the y-axis, and accumulation stops when the electric force on the carriers balances the magnetic force:

$eE_H = Bev$, and since $E_H = \frac{V_H}{d}$, we get $V_H = Bvd$ … (i)

The current is $i = nevA$, where n is the number of electrons per unit volume and $A = td$ is the cross-sectional area, so $v = \frac{i}{neA}$ and

$V_H = \frac{Bi}{net}$ … (ii)

So from equations (i) and (ii) we get the Hall coefficient

$R_H = \frac{E_H}{JB} = -\frac{1}{ne}$

i.e. the ratio between the induced electric field and the product of current density and magnetic field; its unit is m³/coulomb. $R_H$ is negative for free electrons and positive for holes in semiconductors; in some metals $R_H$ has been found to be positive, which indicates that the charge carriers there are positive rather than negative. The ratio of the Hall field to the field driving the current is called the Hall angle, which measures the average number of radians traversed by a carrier between collisions.

Taking typical values B = 1 T, i = 10 A and t = 1 mm, the Hall voltage is about 0.6 µV for copper and about 6 mV for silicon: the effect is far more measurable in semiconductors than in metals, and the formula lets one determine whether a material serves as a conductor, a semiconductor or an insulator. Hall effect sensors in integrated circuits exploit this for contactless, precise position measurement, detecting a change of their analogue output voltage relative to a magnetic array; the effect is also used for accurate measurement of magnetic field strength (given $R_H$ or the carrier concentration), carrier concentration and Hall mobility. The Table of Hall coefficients of a number of metals and semiconductors at room temperature, with the number of electrons per unit volume, illustrates these differences.

The Hall resistance $R = \frac{V_H}{i} = \frac{B}{net}$ is expected to increase linearly with the magnetic field; the quantity R has the dimension of resistance, though it is not a resistance in the conventional sense. In 1980, however, the German physicist Klaus von Klitzing showed in his experiment that the Hall resistance did not increase linearly with the field, but instead in a series of stair steps. This quantized Hall effect earned him the 1985 Nobel Prize in Physics.
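The relation $V_H = \frac{Bi}{net}$ can be checked numerically; a minimal Python sketch, where the carrier densities are assumed order-of-magnitude values chosen to reproduce the 0.6 µV and 6 mV figures quoted for copper and silicon:

```python
E_CHARGE = 1.6e-19  # elementary charge, C (rounded)

def hall_voltage(i_amp, b_tesla, n_per_m3, t_metres):
    """V_H = B*i / (n*e*t): Hall voltage across a slab of thickness t
    carrying current i in a perpendicular field B, with carrier density n."""
    return b_tesla * i_amp / (n_per_m3 * E_CHARGE * t_metres)

# Assumed order-of-magnitude carrier densities for illustration:
n_copper = 1.0e29    # free electrons per m^3 in copper
n_silicon = 1.0e25   # carriers per m^3 in doped silicon

# B = 1 T, i = 10 A, t = 1 mm, as in the text:
print(hall_voltage(10.0, 1.0, n_copper, 1e-3))   # ~6e-07 V (~0.6 uV)
print(hall_voltage(10.0, 1.0, n_silicon, 1e-3))  # ~6e-03 V (~6 mV)
```

The million-fold difference between the two results is why Hall sensors are built from semiconductors rather than metals.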