| problem (string, lengths 26 to 131k) | labels (class label, 2 classes) |
|---|---|
Mac (OS X): Is there a way to install ONLY redis-cli? : <p>I tried to run <code>brew install redis-cli</code> and googled, but found nothing. Any ideas?</p>
| 0debug
|
Is there a less blunt way of bringing base class methods into a child class? : <p>Consider</p>
<pre><code>struct Base
{
int foo(int);
int foo(int, int);
};
struct Child : Base
{
using Base::foo;
int foo(int, int, int);
};
</code></pre>
<p>Ideally I want to bring the <code>Base</code> class <code>foo</code> that takes only one <code>int</code> as a parameter into the <code>Child</code> class, and not the one that takes 2 <code>int</code>s. Is there a way I can do that? My writing <code>using Base::foo;</code> brings in both <code>foo</code> methods of <code>Base</code>.</p>
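A conventional workaround is to drop the using-declaration and instead write a small forwarding overload in `Child` that calls exactly the `Base` overload you want; the two-argument `Base::foo` then stays hidden. A sketch (the function bodies here are invented for illustration, since the question declares `foo` without defining it):

```cpp
struct Base
{
    int foo(int a)        { return a; }       // illustrative body
    int foo(int a, int b) { return a + b; }   // illustrative body
};

struct Child : Base
{
    // Forward only the one-int overload instead of `using Base::foo;`,
    // which would bring in every Base::foo overload.
    int foo(int a) { return Base::foo(a); }
    int foo(int a, int b, int c) { return a + b + c; }
};
```

With this, `Child c; c.foo(1, 2);` fails to compile, which is the desired behavior.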
| 0debug
|
Is it possible to register a Flutter app as an Android Intent Filter and to handle Incoming Intents? : <p>One can launch another Activity using an Intent from a Flutter app:
<a href="https://github.com/flutter/flutter/blob/master/examples/widgets/launch_url.dart">https://github.com/flutter/flutter/blob/master/examples/widgets/launch_url.dart</a></p>
<pre><code>import 'package:flutter/widgets.dart';
import 'package:flutter/services.dart';
void main() {
runApp(new GestureDetector(
onTap: () {
Intent intent = new Intent()
..action = 'android.intent.action.VIEW'
..url = 'http://flutter.io/';
activity.startActivity(intent);
},
child: new Container(
decoration: const BoxDecoration(
backgroundColor: const Color(0xFF006600)
),
child: new Center(
child: new Text('Tap to launch a URL!')
)
)
));
}
</code></pre>
<p>But can one do the following with the Flutter Activity Intent services when an Intent is passed to the app?
<a href="http://developer.android.com/training/sharing/receive.html">http://developer.android.com/training/sharing/receive.html</a></p>
<pre><code>. . .
void onCreate (Bundle savedInstanceState) {
...
// Get intent, action and MIME type
Intent intent = getIntent();
. . .
</code></pre>
| 0debug
|
How can a web application verify data from a database : <p>I am new to JEE. I want to create an authentication application that verifies the login and password against a Postgres database through a servlet. Any help please?</p>
| 0debug
|
What is wp-includes in WordPress and what is its purpose? : <p>I want to know the purpose of the wp-includes directory in WordPress.</p>
| 0debug
|
void helper_fitoq(CPUSPARCState *env, int32_t src)
{
QT0 = int32_to_float128(src, &env->fp_status);
}
| 1threat
|
static void close_slave(TeeSlave *tee_slave)
{
AVFormatContext *avf;
unsigned i;
avf = tee_slave->avf;
for (i = 0; i < avf->nb_streams; ++i) {
AVBitStreamFilterContext *bsf_next, *bsf = tee_slave->bsfs[i];
while (bsf) {
bsf_next = bsf->next;
av_bitstream_filter_close(bsf);
bsf = bsf_next;
}
}
av_freep(&tee_slave->stream_map);
av_freep(&tee_slave->bsfs);
ff_format_io_close(avf, &avf->pb);
avformat_free_context(avf);
tee_slave->avf = NULL;
}
| 1threat
|
static int ulti_decode_frame(AVCodecContext *avctx,
void *data, int *data_size,
AVPacket *avpkt)
{
const uint8_t *buf = avpkt->data;
int buf_size = avpkt->size;
UltimotionDecodeContext *s=avctx->priv_data;
int modifier = 0;
int uniq = 0;
int mode = 0;
int blocks = 0;
int done = 0;
int x = 0, y = 0;
int i;
int skip;
int tmp;
s->frame.reference = 1;
s->frame.buffer_hints = FF_BUFFER_HINTS_VALID | FF_BUFFER_HINTS_PRESERVE | FF_BUFFER_HINTS_REUSABLE;
if (avctx->reget_buffer(avctx, &s->frame) < 0) {
av_log(avctx, AV_LOG_ERROR, "reget_buffer() failed\n");
return -1;
}
while(!done) {
int idx;
if(blocks >= s->blocks || y >= s->height)
break;
idx = *buf++;
if((idx & 0xF8) == 0x70) {
switch(idx) {
case 0x70:
modifier = *buf++;
if(modifier>1)
av_log(avctx, AV_LOG_INFO, "warning: modifier must be 0 or 1, got %i\n", modifier);
break;
case 0x71:
uniq = 1;
break;
case 0x72:
mode = !mode;
break;
case 0x73:
done = 1;
break;
case 0x74:
skip = *buf++;
if ((blocks + skip) >= s->blocks)
break;
blocks += skip;
x += skip * 8;
while(x >= s->width) {
x -= s->width;
y += 8;
}
break;
default:
av_log(avctx, AV_LOG_INFO, "warning: unknown escape 0x%02X\n", idx);
}
} else {
int code;
int cf;
int angle = 0;
uint8_t Y[4];
int tx = 0, ty = 0;
int chroma = 0;
if (mode || uniq) {
uniq = 0;
cf = 1;
chroma = 0;
} else {
cf = 0;
if (idx)
chroma = *buf++;
}
for (i = 0; i < 4; i++) {
code = (idx >> (6 - i*2)) & 3;
if(!code)
continue;
if(cf)
chroma = *buf++;
tx = x + block_coords[i * 2];
ty = y + block_coords[(i * 2) + 1];
switch(code) {
case 1:
tmp = *buf++;
angle = angle_by_index[(tmp >> 6) & 0x3];
Y[0] = tmp & 0x3F;
Y[1] = Y[0];
if (angle) {
Y[2] = Y[0]+1;
if (Y[2] > 0x3F)
Y[2] = 0x3F;
Y[3] = Y[2];
} else {
Y[2] = Y[0];
Y[3] = Y[0];
}
break;
case 2:
if (modifier) {
tmp = bytestream_get_be24(&buf);
Y[0] = (tmp >> 18) & 0x3F;
Y[1] = (tmp >> 12) & 0x3F;
Y[2] = (tmp >> 6) & 0x3F;
Y[3] = tmp & 0x3F;
angle = 16;
} else {
tmp = bytestream_get_be16(&buf);
angle = (tmp >> 12) & 0xF;
tmp &= 0xFFF;
tmp <<= 2;
Y[0] = s->ulti_codebook[tmp];
Y[1] = s->ulti_codebook[tmp + 1];
Y[2] = s->ulti_codebook[tmp + 2];
Y[3] = s->ulti_codebook[tmp + 3];
}
break;
case 3:
if (modifier) {
uint8_t Luma[16];
tmp = bytestream_get_be24(&buf);
Luma[0] = (tmp >> 18) & 0x3F;
Luma[1] = (tmp >> 12) & 0x3F;
Luma[2] = (tmp >> 6) & 0x3F;
Luma[3] = tmp & 0x3F;
tmp = bytestream_get_be24(&buf);
Luma[4] = (tmp >> 18) & 0x3F;
Luma[5] = (tmp >> 12) & 0x3F;
Luma[6] = (tmp >> 6) & 0x3F;
Luma[7] = tmp & 0x3F;
tmp = bytestream_get_be24(&buf);
Luma[8] = (tmp >> 18) & 0x3F;
Luma[9] = (tmp >> 12) & 0x3F;
Luma[10] = (tmp >> 6) & 0x3F;
Luma[11] = tmp & 0x3F;
tmp = bytestream_get_be24(&buf);
Luma[12] = (tmp >> 18) & 0x3F;
Luma[13] = (tmp >> 12) & 0x3F;
Luma[14] = (tmp >> 6) & 0x3F;
Luma[15] = tmp & 0x3F;
ulti_convert_yuv(&s->frame, tx, ty, Luma, chroma);
} else {
tmp = *buf++;
if(tmp & 0x80) {
angle = (tmp >> 4) & 0x7;
tmp = (tmp << 8) + *buf++;
Y[0] = (tmp >> 6) & 0x3F;
Y[1] = tmp & 0x3F;
Y[2] = (*buf++) & 0x3F;
Y[3] = (*buf++) & 0x3F;
ulti_grad(&s->frame, tx, ty, Y, chroma, angle);
} else {
int f0, f1;
f0 = *buf++;
f1 = tmp;
Y[0] = (*buf++) & 0x3F;
Y[1] = (*buf++) & 0x3F;
ulti_pattern(&s->frame, tx, ty, f1, f0, Y[0], Y[1], chroma);
}
}
break;
}
if(code != 3)
ulti_grad(&s->frame, tx, ty, Y, chroma, angle);
}
blocks++;
x += 8;
if(x >= s->width) {
x = 0;
y += 8;
}
}
}
*data_size=sizeof(AVFrame);
*(AVFrame*)data= s->frame;
return buf_size;
}
| 1threat
|
int ff_v4l2_m2m_codec_end(AVCodecContext *avctx)
{
V4L2m2mContext* s = avctx->priv_data;
int ret;
ret = ff_v4l2_context_set_status(&s->output, VIDIOC_STREAMOFF);
if (ret)
av_log(avctx, AV_LOG_ERROR, "VIDIOC_STREAMOFF %s\n", s->output.name);
ret = ff_v4l2_context_set_status(&s->capture, VIDIOC_STREAMOFF);
if (ret)
av_log(avctx, AV_LOG_ERROR, "VIDIOC_STREAMOFF %s\n", s->capture.name);
ff_v4l2_context_release(&s->output);
if (atomic_load(&s->refcount))
av_log(avctx, AV_LOG_ERROR, "ff_v4l2m2m_codec_end leaving pending buffers\n");
ff_v4l2_context_release(&s->capture);
sem_destroy(&s->refsync);
if (close(s->fd) < 0 )
av_log(avctx, AV_LOG_ERROR, "failure closing %s (%s)\n", s->devname, av_err2str(AVERROR(errno)));
s->fd = -1;
return 0;
}
| 1threat
|
PHP convert this timestamp to epoch/unix time (int) : <p>Hi guys, I'm looking to convert this type of datetime stamp to Unix/epoch time in PHP:</p>
<pre><code>2019-08-10D00:00:03.712125000
</code></pre>
<p>Any ideas ?</p>
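The question asks for PHP, where one would replace the `D`, truncate the nanoseconds to microseconds, and parse with `DateTime::createFromFormat`; the same normalization idea is sketched here in Python, assuming the stamp is UTC (the question does not say which timezone it is in):

```python
from datetime import datetime, timezone

stamp = "2019-08-10D00:00:03.712125000"

# Normalize the oddball 'D' separator and trim nanoseconds to microseconds,
# since standard date parsers accept at most six fractional digits.
head, frac = stamp.replace("D", "T").split(".")
normalized = head + "." + frac[:6]

dt = datetime.strptime(normalized, "%Y-%m-%dT%H:%M:%S.%f")
epoch = dt.replace(tzinfo=timezone.utc).timestamp()  # UTC is an assumption
```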
| 0debug
|
static char *SocketAddress_to_str(const char *prefix, SocketAddress *addr,
bool is_listen, bool is_telnet)
{
switch (addr->type) {
case SOCKET_ADDRESS_KIND_INET:
return g_strdup_printf("%s%s:%s:%s%s", prefix,
is_telnet ? "telnet" : "tcp",
addr->u.inet.data->host,
addr->u.inet.data->port,
is_listen ? ",server" : "");
break;
case SOCKET_ADDRESS_KIND_UNIX:
return g_strdup_printf("%sunix:%s%s", prefix,
addr->u.q_unix.data->path,
is_listen ? ",server" : "");
break;
case SOCKET_ADDRESS_KIND_FD:
return g_strdup_printf("%sfd:%s%s", prefix, addr->u.fd.data->str,
is_listen ? ",server" : "");
break;
default:
abort();
}
}
| 1threat
|
Read from a file which is actively being written : I have two programs: one writes to a file continuously, and I want the other program to read that file continuously. What is happening is that my second program only reads up to the data point that had been written when it was started, then stops instead of continuing to read. Is there any way to achieve this? Basically I want the output of program 1 to be used as the input of program 2. Is there any way to read and write in RAM instead of a file, since disk reads cost more time?
| 0debug
|
Spec Flow Test Cases Getting Called Twice : <p>We have written test cases using SpecFlow, but when we run them, each test gets called twice.</p>
<p>Any idea what might be the cause?</p>
<p>Any help appreciated.</p>
<p>Thanks,</p>
| 0debug
|
I want to sort my ListView by price, but the price values come from my web service as strings like "$49.55" and "$39.55". : How can I compare the two values?
Which data type should be used for them?
I get string-type data.
How do I parse the data to compare the values?
This is my spinner click event. How can I sort the values above using my "JSON" service object?
public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
final TextView selectedText = (TextView) parent.getChildAt(0);
if (selectedText != null) {
selectedText.setTextColor(Color.WHITE);
if (id == 1) {
selectedText.setTextColor(Color.WHITE);
Collections.sort(mlistElectricity, new Comparator<RetailerPlanBean>(){
@Override
public int compare(RetailerPlanBean emp1, RetailerPlanBean emp2) {
return emp1.getmPRICE().compareToIgnoreCase(emp2.getmPRICE());
}
});
mRateLv = (ListView) rootview.findViewById(R.id.find_enery);
mRateLv.setAdapter(new FindRateAdapter(getActivity(), mlistElectricity));
mAllListCountTv.setText(""+mlistElectricity.size()+" LIST");
adapter.notifyDataSetChanged();
}else if (id == 2){
selectedText.setTextColor(Color.WHITE);
Collections.sort(mlistElectricity, new Comparator<RetailerPlanBean>(){
@Override
public int compare(RetailerPlanBean emp1, RetailerPlanBean emp2) {
return emp2.getmPRICE().compareToIgnoreCase(emp1.getmPRICE());
}
});
mRateLv = (ListView) rootview.findViewById(R.id.find_enery);
mRateLv.setAdapter(new FindRateAdapter(getActivity(), mlistElectricity));
mAllListCountTv.setText(""+mlistElectricity.size()+" LIST");
adapter.notifyDataSetChanged();
}else {
selectedText.setTextColor(Color.WHITE);
}
}
}
@Override
public void onNothingSelected(AdapterView<?> parent) {
}
});
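`compareToIgnoreCase` above sorts lexically, so a price such as "$139.55" (an illustrative value, not from the question) would order before "$39.55". A common fix, sketched here on a plain list of strings standing in for `RetailerPlanBean`: strip the currency symbol and compare numerically.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class PriceSort {
    // "$49.55" -> 49.55; drop everything that is not a digit or decimal point.
    static double parsePrice(String s) {
        return Double.parseDouble(s.replaceAll("[^0-9.]", ""));
    }

    // Ascending sort; reverse the comparator for the descending case (id == 2).
    static void sortAscending(List<String> prices) {
        prices.sort(Comparator.comparingDouble(PriceSort::parsePrice));
    }
}
```

In the question's comparators, this would mean comparing `parsePrice(emp1.getmPRICE())` against `parsePrice(emp2.getmPRICE())` instead of using `compareToIgnoreCase`.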
| 0debug
|
av_cold int ff_rate_control_init(MpegEncContext *s)
{
RateControlContext *rcc = &s->rc_context;
int i, res;
static const char * const const_names[] = {
"PI",
"E",
"iTex",
"pTex",
"tex",
"mv",
"fCode",
"iCount",
"mcVar",
"var",
"isI",
"isP",
"isB",
"avgQP",
"qComp",
#if 0
"lastIQP",
"lastPQP",
"lastBQP",
"nextNonBQP",
#endif
"avgIITex",
"avgPITex",
"avgPPTex",
"avgBPTex",
"avgTex",
NULL
};
static double (* const func1[])(void *, double) = {
(void *)bits2qp,
(void *)qp2bits,
NULL
};
static const char * const func1_names[] = {
"bits2qp",
"qp2bits",
NULL
};
emms_c();
if (!s->avctx->rc_max_available_vbv_use && s->avctx->rc_buffer_size) {
if (s->avctx->rc_max_rate) {
s->avctx->rc_max_available_vbv_use = av_clipf(s->avctx->rc_max_rate/(s->avctx->rc_buffer_size*get_fps(s->avctx)), 1.0/3, 1.0);
} else
s->avctx->rc_max_available_vbv_use = 1.0;
}
res = av_expr_parse(&rcc->rc_eq_eval,
s->rc_eq ? s->rc_eq : "tex^qComp",
const_names, func1_names, func1,
NULL, NULL, 0, s->avctx);
if (res < 0) {
av_log(s->avctx, AV_LOG_ERROR, "Error parsing rc_eq \"%s\"\n", s->rc_eq);
return res;
}
for (i = 0; i < 5; i++) {
rcc->pred[i].coeff = FF_QP2LAMBDA * 7.0;
rcc->pred[i].count = 1.0;
rcc->pred[i].decay = 0.4;
rcc->i_cplx_sum [i] =
rcc->p_cplx_sum [i] =
rcc->mv_bits_sum[i] =
rcc->qscale_sum [i] =
rcc->frame_count[i] = 1;
rcc->last_qscale_for[i] = FF_QP2LAMBDA * 5;
}
rcc->buffer_index = s->avctx->rc_initial_buffer_occupancy;
if (!rcc->buffer_index)
rcc->buffer_index = s->avctx->rc_buffer_size * 3 / 4;
if (s->flags & CODEC_FLAG_PASS2) {
int i;
char *p;
p = s->avctx->stats_in;
for (i = -1; p; i++)
p = strchr(p + 1, ';');
i += s->max_b_frames;
if (i <= 0 || i >= INT_MAX / sizeof(RateControlEntry))
return -1;
rcc->entry = av_mallocz(i * sizeof(RateControlEntry));
rcc->num_entries = i;
for (i = 0; i < rcc->num_entries; i++) {
RateControlEntry *rce = &rcc->entry[i];
rce->pict_type = rce->new_pict_type = AV_PICTURE_TYPE_P;
rce->qscale = rce->new_qscale = FF_QP2LAMBDA * 2;
rce->misc_bits = s->mb_num + 10;
rce->mb_var_sum = s->mb_num * 100;
}
p = s->avctx->stats_in;
for (i = 0; i < rcc->num_entries - s->max_b_frames; i++) {
RateControlEntry *rce;
int picture_number;
int e;
char *next;
next = strchr(p, ';');
if (next) {
(*next) = 0;
next++;
}
e = sscanf(p, " in:%d ", &picture_number);
assert(picture_number >= 0);
assert(picture_number < rcc->num_entries);
rce = &rcc->entry[picture_number];
e += sscanf(p, " in:%*d out:%*d type:%d q:%f itex:%d ptex:%d mv:%d misc:%d fcode:%d bcode:%d mc-var:%"SCNd64" var:%"SCNd64" icount:%d skipcount:%d hbits:%d",
&rce->pict_type, &rce->qscale, &rce->i_tex_bits, &rce->p_tex_bits,
&rce->mv_bits, &rce->misc_bits,
&rce->f_code, &rce->b_code,
&rce->mc_mb_var_sum, &rce->mb_var_sum,
&rce->i_count, &rce->skip_count, &rce->header_bits);
if (e != 14) {
av_log(s->avctx, AV_LOG_ERROR,
"statistics are damaged at line %d, parser out=%d\n",
i, e);
return -1;
}
p = next;
}
if (init_pass2(s) < 0)
return -1;
if ((s->flags & CODEC_FLAG_PASS2) && s->avctx->rc_strategy == FF_RC_STRATEGY_XVID) {
#if CONFIG_LIBXVID
return ff_xvid_rate_control_init(s);
#else
av_log(s->avctx, AV_LOG_ERROR,
"Xvid ratecontrol requires libavcodec compiled with Xvid support.\n");
return -1;
#endif
}
}
if (!(s->flags & CODEC_FLAG_PASS2)) {
rcc->short_term_qsum = 0.001;
rcc->short_term_qcount = 0.001;
rcc->pass1_rc_eq_output_sum = 0.001;
rcc->pass1_wanted_bits = 0.001;
if (s->avctx->qblur > 1.0) {
av_log(s->avctx, AV_LOG_ERROR, "qblur too large\n");
return -1;
}
if (s->rc_initial_cplx) {
for (i = 0; i < 60 * 30; i++) {
double bits = s->rc_initial_cplx * (i / 10000.0 + 1.0) * s->mb_num;
RateControlEntry rce;
if (i % ((s->gop_size + 3) / 4) == 0)
rce.pict_type = AV_PICTURE_TYPE_I;
else if (i % (s->max_b_frames + 1))
rce.pict_type = AV_PICTURE_TYPE_B;
else
rce.pict_type = AV_PICTURE_TYPE_P;
rce.new_pict_type = rce.pict_type;
rce.mc_mb_var_sum = bits * s->mb_num / 100000;
rce.mb_var_sum = s->mb_num;
rce.qscale = FF_QP2LAMBDA * 2;
rce.f_code = 2;
rce.b_code = 1;
rce.misc_bits = 1;
if (s->pict_type == AV_PICTURE_TYPE_I) {
rce.i_count = s->mb_num;
rce.i_tex_bits = bits;
rce.p_tex_bits = 0;
rce.mv_bits = 0;
} else {
rce.i_count = 0;
rce.i_tex_bits = 0;
rce.p_tex_bits = bits * 0.9;
rce.mv_bits = bits * 0.1;
}
rcc->i_cplx_sum[rce.pict_type] += rce.i_tex_bits * rce.qscale;
rcc->p_cplx_sum[rce.pict_type] += rce.p_tex_bits * rce.qscale;
rcc->mv_bits_sum[rce.pict_type] += rce.mv_bits;
rcc->frame_count[rce.pict_type]++;
get_qscale(s, &rce, rcc->pass1_wanted_bits / rcc->pass1_rc_eq_output_sum, i);
rcc->pass1_wanted_bits += s->bit_rate / get_fps(s->avctx);
}
}
}
return 0;
}
| 1threat
|
opencv - cropping handwritten lines (line segmentation) : <p>I'm trying to build a handwriting recognition system using python and opencv.
The recognition of the characters is not the problem but the segmentation.
I have successfully :</p>
<ul>
<li>segmented a word into single characters</li>
<li>segmented a <strong>single sentence</strong> into words in the required order.</li>
</ul>
<p>But I couldn't segment different lines in the document. I tried sorting the contours (to avoid line segmentation and use only word segmentation) but it didnt work.
I have used the following code to segment words contained in a handwritten document , but it returns the words out-of-order(it returns words in left-to-right sorted manner) :</p>
<pre><code>import cv2
import numpy as np
#import image
image = cv2.imread('input.jpg')
#cv2.imshow('orig',image)
#cv2.waitKey(0)
#grayscale
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
cv2.imshow('gray',gray)
cv2.waitKey(0)
#binary
ret,thresh = cv2.threshold(gray,127,255,cv2.THRESH_BINARY_INV)
cv2.imshow('second',thresh)
cv2.waitKey(0)
#dilation
kernel = np.ones((5,5), np.uint8)
img_dilation = cv2.dilate(thresh, kernel, iterations=1)
cv2.imshow('dilated',img_dilation)
cv2.waitKey(0)
#find contours
im2,ctrs, hier = cv2.findContours(img_dilation.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
#sort contours
sorted_ctrs = sorted(ctrs, key=lambda ctr: cv2.boundingRect(ctr)[0])
for i, ctr in enumerate(sorted_ctrs):
# Get bounding box
x, y, w, h = cv2.boundingRect(ctr)
# Getting ROI
roi = image[y:y+h, x:x+w]
# show ROI
cv2.imshow('segment no:'+str(i),roi)
cv2.rectangle(image,(x,y),( x + w, y + h ),(90,0,255),2)
cv2.waitKey(0)
cv2.imshow('marked areas',image)
cv2.waitKey(0)
</code></pre>
<p>Please note that I am able to segment all the words here <strong>but they appear out of order. Is there any way to sort these contours in order of top to bottom</strong></p>
<p><strong>OR</strong></p>
<p><strong>segment the image into separate lines so that each line can be segmented into words using above code?</strong></p>
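One common answer is to band the boxes into lines by their y coordinate first, then sort each line left to right. A sketch on plain `(x, y, w, h)` tuples so it runs without OpenCV (in the question these come from `cv2.boundingRect`; `line_tol` is an assumed pixel tolerance, roughly half a line height):

```python
def reading_order(boxes, line_tol=10):
    """Sort word bounding boxes into reading order (assumes at least one box).

    boxes: list of (x, y, w, h) tuples as returned by cv2.boundingRect.
    """
    boxes = sorted(boxes, key=lambda b: b[1])      # rough top-to-bottom pass
    lines, current = [], [boxes[0]]
    for b in boxes[1:]:
        if abs(b[1] - current[0][1]) <= line_tol:  # same line band as anchor
            current.append(b)
        else:
            lines.append(current)
            current = [b]
    lines.append(current)
    # Within each detected line, sort left to right.
    return [b for line in lines for b in sorted(line, key=lambda b: b[0])]
```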
| 0debug
|
void ff_avg_h264_qpel16_mc22_msa(uint8_t *dst, const uint8_t *src,
ptrdiff_t stride)
{
avc_luma_mid_and_aver_dst_16x16_msa(src - (2 * stride) - 2,
stride, dst, stride);
}
| 1threat
|
static inline uint64_t vtd_iova_limit(VTDContextEntry *ce)
{
uint32_t ce_agaw = vtd_ce_get_agaw(ce);
return 1ULL << MIN(ce_agaw, VTD_MGAW);
}
| 1threat
|
static uint64_t addrrange_end(AddrRange r)
{
return r.start + r.size;
}
| 1threat
|
Your account already has a signing certificate for this machine but it is not present in your keychain : <p>I get this error, verbatim, when trying to build and install to a device.</p>
<p>This is my environment:</p>
<ul>
<li>Xcode 8</li>
<li>El Capitan</li>
</ul>
<p>Has anyone run into this problem? How did you solve it?</p>
<p>Thank you</p>
| 0debug
|
void microblaze_load_kernel(MicroBlazeCPU *cpu, hwaddr ddr_base,
uint32_t ramsize,
const char *initrd_filename,
const char *dtb_filename,
void (*machine_cpu_reset)(MicroBlazeCPU *))
{
QemuOpts *machine_opts;
const char *kernel_filename;
const char *kernel_cmdline;
const char *dtb_arg;
machine_opts = qemu_get_machine_opts();
kernel_filename = qemu_opt_get(machine_opts, "kernel");
kernel_cmdline = qemu_opt_get(machine_opts, "append");
dtb_arg = qemu_opt_get(machine_opts, "dtb");
if (dtb_arg) {
dtb_filename = dtb_arg;
} else {
dtb_filename = qemu_find_file(QEMU_FILE_TYPE_BIOS, dtb_filename);
}
boot_info.machine_cpu_reset = machine_cpu_reset;
qemu_register_reset(main_cpu_reset, cpu);
if (kernel_filename) {
int kernel_size;
uint64_t entry, low, high;
uint32_t base32;
int big_endian = 0;
#ifdef TARGET_WORDS_BIGENDIAN
big_endian = 1;
#endif
kernel_size = load_elf(kernel_filename, NULL, NULL,
&entry, &low, &high,
big_endian, ELF_MACHINE, 0);
base32 = entry;
if (base32 == 0xc0000000) {
kernel_size = load_elf(kernel_filename, translate_kernel_address,
NULL, &entry, NULL, NULL,
big_endian, ELF_MACHINE, 0);
}
boot_info.bootstrap_pc = ddr_base + (entry & 0x0fffffff);
if (kernel_size < 0) {
hwaddr uentry, loadaddr;
kernel_size = load_uimage(kernel_filename, &uentry, &loadaddr, 0);
boot_info.bootstrap_pc = uentry;
high = (loadaddr + kernel_size + 3) & ~3;
}
if (kernel_size < 0) {
kernel_size = load_image_targphys(kernel_filename, ddr_base,
ram_size);
boot_info.bootstrap_pc = ddr_base;
high = (ddr_base + kernel_size + 3) & ~3;
}
if (initrd_filename) {
int initrd_size;
uint32_t initrd_offset;
high = ROUND_UP(high + kernel_size, 4);
boot_info.initrd_start = high;
initrd_offset = boot_info.initrd_start - ddr_base;
initrd_size = load_ramdisk(initrd_filename,
boot_info.initrd_start,
ram_size - initrd_offset);
if (initrd_size < 0) {
initrd_size = load_image_targphys(initrd_filename,
boot_info.initrd_start,
ram_size - initrd_offset);
}
if (initrd_size < 0) {
error_report("qemu: could not load initrd '%s'\n",
initrd_filename);
exit(EXIT_FAILURE);
}
boot_info.initrd_end = boot_info.initrd_start + initrd_size;
high = ROUND_UP(high + initrd_size, 4);
}
boot_info.cmdline = high + 4096;
if (kernel_cmdline && strlen(kernel_cmdline)) {
pstrcpy_targphys("cmdline", boot_info.cmdline, 256, kernel_cmdline);
}
boot_info.fdt = boot_info.cmdline + 4096;
microblaze_load_dtb(boot_info.fdt, ram_size,
boot_info.initrd_start,
boot_info.initrd_end,
kernel_cmdline,
dtb_filename);
}
}
| 1threat
|
How can I split a string by comma and/or newline and/or whitespace : <p>Hi there, I am searching for a regex to split emails, but with no success so far.
The point is,
I want to make it possible to separate this:</p>
<pre><code>o@gmail.com b@gmail.com c@gmail.om
</code></pre>
<p>or </p>
<pre><code>o@gmail.com, b@gmail.com,c@gmail.com
</code></pre>
<p>or </p>
<pre><code>o@gmail.com,
b@gmail.com, c@gmail.com
</code></pre>
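One regex that covers all three inputs, shown here in JavaScript (the same character-class pattern works in most regex flavors, e.g. with `preg_split` in PHP): split on any run of commas and/or whitespace, then drop empty pieces.

```javascript
// Split on one or more commas/whitespace characters (spaces, tabs, newlines).
function splitEmails(input) {
  return input.split(/[\s,]+/).filter(Boolean);
}
```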
| 0debug
|
Broadcast receiver does not receive SMS in Android? : I am using a broadcast receiver in my app to read an OTP sent from the server. I did not mention any permission in the manifest.xml, but it does not read the OTP. I don't know where the problem is; kindly help me rectify it.
public BroadcastReceiver br = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
final Bundle bundle = intent.getExtras();
try {
if (bundle != null) {
Object[] pdusObj = (Object[]) bundle.get("pdus");
assert pdusObj != null;
for (Object aPdusObj : pdusObj) {
@SuppressWarnings("deprecation") SmsMessage currentMessage = SmsMessage.createFromPdu((byte[]) aPdusObj);
String phoneNumber = currentMessage.getDisplayOriginatingAddress();
String message = currentMessage.getDisplayMessageBody();
Log.e(s_szTAG, "Received SMS: " + message + ", Sender: " + phoneNumber);
// checking sms sender address....
if (phoneNumber.toLowerCase().contains("+919971599909".toLowerCase())) {
// verification code from sms
m_szOtpCode = getVerificationCode(message);
assert m_szOtpCode != null;
String input = m_szOtpCode.trim();
Log.e(s_szTAG, "OTP received: " + m_szOtpCode);
COTPVerificationDataStorage.getInstance().setM_szOtp(input);// getting otp from SMS and set to otpverificationstorage class
} else {
return;
}
}
}
} catch (Exception e) {
Log.e(s_szTAG, "Exception: " + e.getMessage());
}
}
@SuppressWarnings("JavaDoc")
private String getVerificationCode(String message) {
String code;
int index = message.indexOf(":");
if (index != -1) {
int start = index + 2;
int length = 6;
code = message.substring(start, start + length);
return code;
}
COTPVerificationDataStorage.getInstance().setM_szOtp(m_szOtpCode);
return null;
}
};
private IntentFilter inf;
@Nullable
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
m_Main = inflater.inflate(R.layout.otp_auto_verified, container, false);
inf = new IntentFilter();
inf.addAction("android.provider.Telephony.SMS_RECEIVED");
getUserDetails();// getuser deatails....
init();// initialize controls...
return m_Main;
}
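Receiving SMS broadcasts requires the `RECEIVE_SMS` permission, so the question's admission that no permission was declared is the most likely cause; note also that the snippet builds an `IntentFilter` but never appears to call `registerReceiver(br, inf)`. A sketch of the missing manifest entries (the receiver class name is a placeholder, and on Android 6.0+ `RECEIVE_SMS` must additionally be requested at runtime):

```xml
<uses-permission android:name="android.permission.RECEIVE_SMS" />

<!-- Only needed if the receiver is declared statically rather than
     registered in code with registerReceiver(br, inf): -->
<receiver android:name=".OtpSmsReceiver" android:exported="true">
    <intent-filter>
        <action android:name="android.provider.Telephony.SMS_RECEIVED" />
    </intent-filter>
</receiver>
```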
| 0debug
|
How to create a chart like this? What library should I use? : <p>What library should I use to create a chart like the one in the image? Sorry that I posted this here, but I didn't know where else to post it.
<a href="https://i.stack.imgur.com/SzCvp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SzCvp.jpg" alt="enter image description here"></a></p>
| 0debug
|
Recurring error using lmer() function for a linear mixed-effects model in R : I am attempting to run a linear mixed-effects model in R using the lmer() function from the lme4 library and am running into a recurring error. I am attempting to run the model with two fixed effects: DBS_Electrode (factor w/3 levels) and PostOp_ICA (continuous variable). I am using (1 | Subject) as the random effect term in which Subject is a factor of 38 levels (38 total subjects). Below is the line of code I am attempting to run:
LMM.DBS <- lmer(Distal_Lead_Migration ~ DBS_Electrode + PostOp_ICA + (1 | Subject), data = DBS)
I am receiving the recurring error "number of levels of each grouping factor must be < number of observations." I would appreciate any help, I have tried to navigate this issue myself and have been unsuccessful.
| 0debug
|
static bool send_gradient_rect(VncState *vs, int w, int h)
{
int stream = 3;
int level = tight_conf[vs->tight_compression].gradient_zlib_level;
size_t bytes;
if (vs->clientds.pf.bytes_per_pixel == 1)
return send_full_color_rect(vs, w, h);
vnc_write_u8(vs, (stream | VNC_TIGHT_EXPLICIT_FILTER) << 4);
vnc_write_u8(vs, VNC_TIGHT_FILTER_GRADIENT);
buffer_reserve(&vs->tight_gradient, w * 3 * sizeof (int));
if (vs->tight_pixel24) {
tight_filter_gradient24(vs, vs->tight.buffer, w, h);
bytes = 3;
} else if (vs->clientds.pf.bytes_per_pixel == 4) {
tight_filter_gradient32(vs, (uint32_t *)vs->tight.buffer, w, h);
bytes = 4;
} else {
tight_filter_gradient16(vs, (uint16_t *)vs->tight.buffer, w, h);
bytes = 2;
}
buffer_reset(&vs->tight_gradient);
bytes = w * h * bytes;
vs->tight.offset = bytes;
bytes = tight_compress_data(vs, stream, bytes,
level, Z_FILTERED);
return (bytes >= 0);
}
| 1threat
|
How to Convert an Array into JSON Format : For example:
```var arr = ["tag1, tag2"]```
I want the above array in JSON format, for example:
```
var arr = [
  {"name": "tag1"},
  {"name": "tag2"}
]
```
How can I do this? Please, can anyone help me out with this problem?
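A sketch in JavaScript, assuming the source array really holds one comma-separated string as shown: split each entry on commas, then wrap every tag in an object.

```javascript
// ["tag1, tag2"] -> [{ name: "tag1" }, { name: "tag2" }]
const arr = ["tag1, tag2"];
const result = arr
  .flatMap(s => s.split(","))        // break each entry into individual tags
  .map(t => ({ name: t.trim() }));   // wrap each tag, trimming stray spaces
```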
| 0debug
|
MS SQL - Name concatenation - Multiple Users : I am trying to concatenate multiple users into a singular string. Here is an example of my DB Model:
dbo.Users
| UserId | AccountId | Title | FirstName | LastName |
| 1234 | 1001 | Mr | John | Banks |
| 1235 | 1001 | Mrs | Georgia | Banks |
| 1236 | 1002 | Mr | Chris | Aims |
| 1237 | 1002 | Mrs | Caroline | Hole |
dbo.Account
| AccountId | SignUpDate | LastLoginDate |
| 1001 | 20/08/2017 | 13/06/2018 |
| 1002 | 20/08/2017 | 13/06/2018 |
I want to be able to get these users in a string like this:
Account 1001:
Mr J & Mrs G Banks
Account 1002:
Mr C Aims & Mrs C Hole
Can anyone make any suggestions?
Thanks
| 0debug
|
How to define the swift version for a specific pod in a Podfile : <p>Is it possible to set swift version compiler to version 3.0 for the pod named 'SideMenuController' in the Podfile below? If yes, then how to do it?</p>
<pre><code>use_frameworks!
platform :ios, '10.0'
def shared_pods
pod 'Alamofire', '4.6.0'
pod 'SideMenuController', '0.2.4'
end
</code></pre>
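CocoaPods does not expose a per-pod Swift version on the `pod` line itself, but a commonly used approach is a `post_install` hook that overrides the `SWIFT_VERSION` build setting for just that target (a sketch; `SWIFT_VERSION` is the standard Xcode build-setting name):

```ruby
post_install do |installer|
  installer.pods_project.targets.each do |target|
    next unless target.name == 'SideMenuController'
    target.build_configurations.each do |config|
      config.build_settings['SWIFT_VERSION'] = '3.0'
    end
  end
end
```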
| 0debug
|
Angular 4, How to update [(ngModel)] with a delay of 1 second : <p>Since ngModel updates instantly, how do I add a delay?</p>
<pre><code> <input type="text" value="{{item.task_name}}" name="task_name" [(ngModel)]="item.task_name" (ngModelChange)="update_fields([item.task_name])" >
</code></pre>
<p>I need to save the task_name with a delay of one second by calling update_fields(), to avoid instant calls to the service.</p>
<p>Thanks</p>
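In Angular the idiomatic route is to push each `ngModelChange` value into an RxJS `Subject` and pipe it through `debounceTime(1000)` before calling `update_fields`. The underlying idea, shown framework-free in plain JavaScript (function names here are illustrative):

```javascript
// Delay calls to `fn` until `ms` milliseconds pass with no newer call.
function debounce(fn, ms) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);                         // cancel the pending call
    timer = setTimeout(() => fn(...args), ms);   // schedule a fresh one
  };
}

// Usage sketch: call this from (ngModelChange) instead of update_fields directly.
// const debouncedUpdate = debounce(name => update_fields([name]), 1000);
```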
| 0debug
|
Angular - Type 'string' is not assignable to type 'boolean' : <p>Angular 4.3.1<br>
Angular CLI 1.2.3<br>
Typescript 2.3.4</p>
<p>Component Typescript file:</p>
<pre><code>public saveName: string;
public overwrite: boolean;
</code></pre>
<p>The following markup fails with <strong>Type 'string' is not assignable to type 'boolean'</strong> when I run <code>ng build --prod</code></p>
<pre><code><span>{{!overwrite || saveName}}</span>
OR
<button *ngIf="!overwrite && saveName">Save</button>
</code></pre>
<p>However, it works fine with the following:</p>
<pre><code><span>{{saveName || !overwrite}}</span>
<span>{{overwrite || saveName}}</span>
<button *ngIf="saveName && !overwrite">Save</button>
<button *ngIf="overwrite && saveName">Save</button>
</code></pre>
<p>Why am I getting that error?<br>
More specifically, why does that error only show up when I have a negated boolean come before a string?</p>
| 0debug
|
What authentication method am I using? :-) : I am connecting to a Microsoft IIS server from a PHP script using curl to send some JSON data, and everything is working fine. But I have a theoretical question: what type of authentication am I using? :-)
Code sample is below:
$curlheader = array("Content-type: application/json", "Auth: N@0062Ibb$#=="); //"Auth:" is changed
$url = "http://xxx.xxx.xxx.xxx:81/feed/ProdICCFeeds/"; //ip is also changed
curl_setopt($curl, CURLOPT_URL, $url); //set url
curl_setopt($curl, CURLOPT_HEADER, false);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_POST, true); //http POST
curl_setopt($curl, CURLOPT_HTTPHEADER, $curlheader); //set header
$content = json_encode($aParamsArray); //set paameters, not here
curl_setopt($curl, CURLOPT_POSTFIELDS, $content);
$json_response = curl_exec($curl);
$status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
| 0debug
|
PHPUnit mock function? : <p>I have an interesting scenario in that I need a function to be defined in order to make tests for another function. The function I want to test looks something like this:</p>
<pre><code>if (function_exists('foo') && ! function_exists('baz')) {
/**
* Baz function
*
* @param integer $n
* @return integer
*/
function baz($n)
{
return foo() + $n;
}
}
</code></pre>
<p>The reason I am checking for the existence of <code>foo</code> is because it may or may not be defined in a developer's project and the function <code>baz</code> relies on <code>foo</code>. Because of this, I only want <code>baz</code> to be defined if it can call <code>foo</code>. </p>
<p>The only problem is that so far it has been impossible to write tests for. I tried creating a bootstrap script in the PHPUnit configuration that would define a fake <code>foo</code> function and then require the Composer autoloader, but my main script still thinks <code>foo</code> is not defined. <code>foo</code> is not a Composer package and can not otherwise be required by my project. Obviously Mockery will not work for this either. My question is if anyone more experienced with PHPUnit has come across this issue and found a solution.</p>
<p>Thanks!</p>
| 0debug
|
How to load chosen alternative CSS as default CSS? : <p>I made a light and a dark CSS for my web site, with the light CSS as default and the dark CSS as the alternative. It works perfectly with the help of JavaScript. Now the problem is, if a user selects the dark version and refreshes the page or goes to the next page, it goes back to the light CSS again. I need my site to load the dark CSS as the default CSS if the user selected it. How can this be done? Is it with the help of JavaScript? If yes, please give the code too.</p>
<p>Thanks</p>
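The usual answer is yes: JavaScript plus `localStorage` (or a cookie). Persist the choice when the user toggles, and read it back on every page load before applying the stylesheet. A sketch where the element id `theme-css` and storage key `theme` are assumptions:

```javascript
// Pure decision logic, kept separate so it is easy to test.
function chooseStylesheet(savedTheme) {
  return savedTheme === "dark" ? "dark.css" : "light.css";
}

// Browser-only wiring (sketch):
// on load:   document.getElementById("theme-css").href =
//                chooseStylesheet(localStorage.getItem("theme"));
// on toggle: localStorage.setItem("theme", "dark");  // or "light"
```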
| 0debug
|
static void av_always_inline filter_mb_edgev( uint8_t *pix, int stride, const int16_t bS[4], unsigned int qp, H264Context *h, int intra ) {
const int qp_bd_offset = 6 * (h->sps.bit_depth_luma - 8);
const unsigned int index_a = qp - qp_bd_offset + h->slice_alpha_c0_offset;
const int alpha = alpha_table[index_a];
const int beta = beta_table[qp - qp_bd_offset + h->slice_beta_offset];
if (alpha ==0 || beta == 0) return;
if( bS[0] < 4 || !intra ) {
int8_t tc[4];
tc[0] = tc0_table[index_a][bS[0]];
tc[1] = tc0_table[index_a][bS[1]];
tc[2] = tc0_table[index_a][bS[2]];
tc[3] = tc0_table[index_a][bS[3]];
h->h264dsp.h264_h_loop_filter_luma(pix, stride, alpha, beta, tc);
} else {
h->h264dsp.h264_h_loop_filter_luma_intra(pix, stride, alpha, beta);
}
}
| 1threat
|
What is the $ symbol used for in JSP? : <p>I am new to JavaServer Pages (JSP) and have been researching this matter, but I don't get what the $ symbol means. Any resources or readings regarding this would be appreciated. Thanks.</p>
| 0debug
|
Delete an image based on URL on Web : <p>I cannot find a function like 'getReferenceFromUrl' in the Firebase Web SDK. I have stored a reference to the URL (and not the name of the image) for each item in my database.</p>
<p>Is there any workaround to get the reference on the image with the URL? </p>
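In the Firebase Web SDK the equivalent is `refFromURL` on the storage service, which accepts an https download URL or a gs:// URL. A sketch, untestable here because it needs a live Firebase project (assumes `firebase` was initialized elsewhere on the page and `imageUrl` holds the stored URL):

```javascript
const ref = firebase.storage().refFromURL(imageUrl); // https://... or gs://...
ref.delete()
  .then(() => console.log("image deleted"))
  .catch(err => console.error("delete failed", err));
```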
| 0debug
|
def sum_Range_list(nums, m, n):
sum_range = 0
for i in range(m, n+1, 1):
sum_range += nums[i]
return sum_range
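The loop above is already correct; for reference, the same inclusive range sum can be written with a slice (a stylistic alternative, not a fix):

```python
def sum_range_list_slice(nums, m, n):
    # Sum of nums[m] through nums[n] inclusive, via slicing.
    return sum(nums[m:n + 1])
```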
| 0debug
|
static int pxa2xx_ssp_load(QEMUFile *f, void *opaque, int version_id)
{
PXA2xxSSPState *s = (PXA2xxSSPState *) opaque;
int i;
s->enable = qemu_get_be32(f);
qemu_get_be32s(f, &s->sscr[0]);
qemu_get_be32s(f, &s->sscr[1]);
qemu_get_be32s(f, &s->sspsp);
qemu_get_be32s(f, &s->ssto);
qemu_get_be32s(f, &s->ssitr);
qemu_get_be32s(f, &s->sssr);
qemu_get_8s(f, &s->sstsa);
qemu_get_8s(f, &s->ssrsa);
qemu_get_8s(f, &s->ssacd);
s->rx_level = qemu_get_byte(f);
s->rx_start = 0;
for (i = 0; i < s->rx_level; i ++)
s->rx_fifo[i] = qemu_get_byte(f);
return 0;
}
| 1threat
|
Precision issues with dlmwrite / dlmread : <p>I recently discovered, quite harshly, that Matlab's <code>dlmread</code> and <code>dlmwrite</code> don't store numerical values at <code>double</code> accuracy. It affects my code, and I need to store big arrays with more precision.</p>
<p>A (not) working example : </p>
<pre><code>pi1 = pi;
dlmwrite('pi',pi1);
pi2 = dlmread('pi');
pi1-pi2
ans =
-7.3464e-06
</code></pre>
<p>While I'd expect machine-error answer, of 10^-14 accuracy.</p>
<p>I'd much rather keep using a simple function as <code>dlmwrite</code>, but I will consider other solutions.</p>
<p>Thanks</p>
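<p>You can keep using <code>dlmwrite</code>: by default it writes only 5 significant digits, but it accepts a documented <code>'precision'</code> option that raises this. A minimal sketch (the filename is illustrative):</p>

```matlab
pi1 = pi;
dlmwrite('pi.txt', pi1, 'precision', 16);  % write ~16 significant digits
pi2 = dlmread('pi.txt');
pi1 - pi2                                  % now on the order of machine epsilon
```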
| 0debug
|
Pandas how to use pd.cut() : <p>Here is the snippet:</p>
<pre><code>test = pd.DataFrame({'days': [0,31,45]})
test['range'] = pd.cut(test.days, [0,30,60])
</code></pre>
<p>Output:</p>
<pre><code> days range
0 0 NaN
1 31 (30, 60]
2 45 (30, 60]
</code></pre>
<p>I am surprised that 0 is not in (0, 30], what should I do to categorize 0 as (0, 30]?</p>
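<p>Assuming the goal is for 0 to land in the first bin: <code>pd.cut</code>'s intervals are open on the left by default, so 0 is excluded from <code>(0, 30]</code>. Passing <code>include_lowest=True</code> closes the first interval on the left. A sketch with the question's data:</p>

```python
import pandas as pd

test = pd.DataFrame({'days': [0, 31, 45]})
# include_lowest=True makes the first interval closed on the left,
# so 0 falls into the first bin instead of becoming NaN.
test['range'] = pd.cut(test.days, [0, 30, 60], include_lowest=True)
```

<p>Alternatively, <code>right=False</code> makes every interval left-closed (<code>[0, 30)</code>, <code>[30, 60)</code>), if that boundary convention suits the data better.</p>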
| 0debug
|
WARNING in budgets, maximum exceeded for initial : <p>When building my Angular 7 project with --prod, I get a warning about budgets.</p>
<p>I have an Angular 7 project that I want to build, but I get this warning:</p>
<pre><code>WARNING in budgets, maximum exceeded for initial. Budget 2 MB was exceeded by 1.77 MB
</code></pre>
<p>these are chunk details:</p>
<pre><code>chunk {scripts} scripts.2cc9101aa9ed72da1ec4.js (scripts) 154 kB [rendered]
chunk {0} runtime.ec2944dd8b20ec099bf3.js (runtime) 1.41 kB [entry] [rendered]
chunk {1} main.13d1eb792af7c2f359ed.js (main) 3.34 MB [initial] [rendered]
chunk {2} polyfills.11b1e0c77d01e41acbba.js (polyfills) 58.2 kB [initial] [rendered]
chunk {3} styles.33b11ad61bf10bb992bb.css (styles) 379 kB [initial] [rendered]
</code></pre>
<p>What exactly are budgets, and how should I manage them?</p>
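<p>Budgets are size thresholds that the CLI checks against each build output; they are defined in <code>angular.json</code> under the production configuration of the build target. A sketch of the relevant fragment (the values shown are illustrative; raise <code>maximumWarning</code>, or remove the entry, to change when the warning fires):</p>

```json
"budgets": [
  {
    "type": "initial",
    "maximumWarning": "4mb",
    "maximumError": "8mb"
  }
]
```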
| 0debug
|
function won't run the second part of the code, how do I fix this? : Here is my code
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
function CheckForm_1(){
var F1 = document.getElementById('srch');
var Lgth = document.getElementById('srch');
if(F1.value == ""){
document.getElementById("Empty_Err");
Empty_Err.style.display = 'inline';
Empty_Err.stylebackgroundColor = 'linen';
return false;
}else{
return true;
}
if(Lgth.value.length > 17 || Lgth.value.length < 4){
document.getElementById("Length_Err");
Length_Err.style.display = 'inline';
backgroundColor = 'linen';
alert("Search length must be be between 4 and 17");
return false;
}else{
return true;
}
}
<!-- end snippet -->
I can't get it to check the field length. Any ideas on this?
The code runs the check for an empty field just fine.
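The `else { return true; }` after the empty-field check exits the function before the length check ever runs, so the second part is unreachable. One sketch of a fix is to fall through on success instead of returning early (the DOM/error-display calls are omitted; names are illustrative):

```javascript
// Returns true only when the value is non-empty AND 4-17 characters long.
// Neither check returns early on success, so both always run.
function checkSearch(value) {
  if (value === "") {
    return false; // empty-field error
  }
  if (value.length > 17 || value.length < 4) {
    return false; // length error
  }
  return true;
}
```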
| 0debug
|
Simple one class program java intro trouble : <p>I've been working on this for hours; it's due tonight, but I got my wisdom teeth out today and the anesthesia is making me easily confused.</p>
<p>What I need is two methods within the class: "toString", which takes dd/mm/yyyy and prints it, and "advance", which increments the day by 1.
When I check the modified date, I receive this: </p>
<pre><code>Initial date: 88/8/8888
Modified date: 88/0/8888
int day, month, year, newDay;
String decision, dummy ;
Scanner read = new Scanner(System.in);
public static void main(String[] args) {
Date dateInstance = new Date();
dateInstance.toString();
dateInstance.advance();
}
public String toString() {
System.out.println("Enter day (mm/xx/yyyy): ");
day = read.nextInt();
System.out.println("Enter month (xx/dd/yyyy): ");
month = read.nextInt();
System.out.println("Enter year (mm/dd/xxxx): ");
year = read.nextInt();
System.out.println("Initial date: "+month+"/"+day+"/"+year);
System.out.println("Modified date: "+month+"/"+newDay+"/"+year);
return null;
/*
String decision = read.nextLine();
System.out.println("Would you like to display the date, and the modified date? (Y / N): ");
if(decision == "N") {
System.out.println("'N' Selected");
}else if(decision == "Y") {
System.out.println("Initial date: "+month+"/"+day+"/"+year);
System.out.println("Modified date: "+month+"/"+newDay+"/"+year);
}
return dummy;
*/
}
public int advance() {
newDay = day + 1;
return newDay;
}
</code></pre>
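<p>The 0 in the modified date comes from ordering: <code>toString()</code> prints <code>newDay</code> before <code>advance()</code> has computed it, so <code>newDay</code> still holds its default of 0. A minimal sketch of the fix (field names follow the question; the Scanner input handling is omitted so the logic is self-contained):</p>

```java
// Sketch: compute newDay BEFORE printing the modified date.
public class DateDemo {
    int day = 8, month = 8, year = 8888, newDay;

    public int advance() {
        newDay = day + 1;
        return newDay;
    }

    public String modified() {
        advance(); // run the update first, then format
        return month + "/" + newDay + "/" + year;
    }

    public static void main(String[] args) {
        System.out.println(new DateDemo().modified());
    }
}
```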
| 0debug
|
QML Canvas: different behaviour in rendering : <p>I am trying to draw an annulus sector in QML using the Canvas object.
First, I have written the javascript code, and I have verified that it is correct by executing it in a browser.</p>
<p>Here it is:</p>
<pre><code>var can = document.getElementById('myCanvas');
var ctx=can.getContext("2d");
var center = {
x: can.width / 2,
y: can.height / 2
};
var minRad = 100;
var maxRad = 250;
var startAngle = toRad(290);
var endAngle = toRad(310);
drawAxis();
drawSector();
function drawSector() {
var p1 = {
x: maxRad * Math.cos(startAngle),
y: maxRad * Math.sin(startAngle)
}
p1 = toCanvasSpace(p1);
var p2 = {
x: minRad * Math.cos(startAngle),
y: minRad * Math.sin(startAngle)
}
p2 = toCanvasSpace(p2);
var p3 = {
x: minRad * Math.cos(endAngle),
y: minRad * Math.sin(endAngle)
}
p3 = toCanvasSpace(p3);
var p4 = {
x: maxRad * Math.cos(endAngle),
y: maxRad * Math.sin(endAngle)
}
p4 = toCanvasSpace(p4);
ctx.beginPath();
ctx.moveTo(p1.x, p1.y);
ctx.arc(center.x, center.y, maxRad, startAngle, endAngle);
ctx.lineTo(p3.x, p3.y);
ctx.arc(center.x, center.y, minRad, endAngle, startAngle, true);
ctx.closePath();
ctx.strokeStyle = "blue";
ctx.lineWidth = 2;
ctx.stroke();
}
function drawAxis() {
ctx.beginPath();
ctx.moveTo(can.width / 2, 0);
ctx.lineTo(can.width / 2, can.height);
ctx.stroke();
ctx.beginPath();
ctx.moveTo(0, can.height / 2);
ctx.lineTo(can.width, can.height / 2);
ctx.stroke();
}
function toRad(degrees) {
return degrees * Math.PI / 180;
}
function toCanvasSpace(p) {
var ret = {};
ret.x = p.x + can.width / 2;
ret.y = p.y + can.height / 2;
return ret;
}
</code></pre>
<p><a href="https://jsfiddle.net/0kwauehj/7/" rel="nofollow noreferrer">Here</a> you can run the code above.
The output is this:</p>
<p><a href="https://i.stack.imgur.com/bmjS0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bmjS0.png" alt="enter image description here"></a></p>
<p>Next, I moved the same code into a Canvas object in Qml.</p>
<p>Here is the main.qml containing the Canvas:</p>
<pre><code>import QtQuick 2.5
import QtQuick.Window 2.2
Window {
visible: true
width: 500
height: 500
x:500
Canvas
{
id: can
anchors.fill: parent
antialiasing: true
onPaint: {
var ctx=can.getContext("2d");
var center = {
x: can.width / 2,
y: can.height / 2
};
var minRad = 100;
var maxRad = 250;
var startAngle = toRad(290);
var endAngle = toRad(310);
drawAxis();
drawSector();
function drawSector() {
var p1 = {
x: maxRad * Math.cos(startAngle),
y: maxRad * Math.sin(startAngle)
}
p1=toCanvasSpace(p1);
var p2 = {
x: minRad * Math.cos(startAngle),
y: minRad * Math.sin(startAngle)
}
p2=toCanvasSpace(p2);
var p3 = {
x: minRad * Math.cos(endAngle),
y: minRad * Math.sin(endAngle)
}
p3=toCanvasSpace(p3);
var p4 = {
x: maxRad * Math.cos(endAngle),
y: maxRad * Math.sin(endAngle)
}
p4=toCanvasSpace(p4);
ctx.beginPath();
ctx.moveTo(p1.x, p1.y);
ctx.arc(center.x, center.y, maxRad, startAngle, endAngle);
ctx.lineTo(p3.x, p3.y);
ctx.arc(center.x, center.y, minRad, endAngle, startAngle, true);
ctx.closePath();
ctx.strokeStyle="blue";
ctx.lineWidth=2;
ctx.stroke();
}
function drawAxis() {
ctx.beginPath();
ctx.moveTo(can.width / 2, 0);
ctx.lineTo(can.width / 2, can.height);
ctx.stroke();
ctx.beginPath();
ctx.moveTo(0, can.height / 2);
ctx.lineTo(can.width, can.height / 2);
ctx.stroke();
}
function toRad(degrees) {
return degrees * Math.PI / 180;
}
function toCanvasSpace(p) {
var ret = {};
ret.x = p.x + can.width / 2;
ret.y = p.y + can.height / 2;
return ret;
}
}
}
}
</code></pre>
<p>In this case I get this output:</p>
<p><a href="https://i.stack.imgur.com/93Ns0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/93Ns0.jpg" alt="enter image description here"></a></p>
<p>As you can see there is an imperfection at the bottom.</p>
<p>I really don't understand why there is that imperfection; moreover I don't understand why the same code gives different output.</p>
<p>Any help is appreciated!
Thanks</p>
| 0debug
|
Finding tokens within a string using a searching loop JAVA : <p>So basically I'm trying to write a method that takes 2 strings as parameters and returns a boolean. It returns true if t occurs as a token within s, and false otherwise. </p>
<p>I'm pretty new to coding so I don't really know what i'm doing wrong, any help is appreciated! </p>
<p>Here's my code:</p>
<pre><code>public static boolean containsToken(String s, String t) {
Scanner scr = new Scanner(s);
</code></pre>
<p>This scanner breaks up String s into tokens</p>
<pre><code>for(int i = 0; i < s.length(); i++) {
</code></pre>
<p>I tried to make this for loop search through the length of s for tokens that match up with String t </p>
<pre><code>if(t.contains(scr.next()))
return true;
}
return false;
}
</code></pre>
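<p>Two things trip the method up: the loop runs <code>s.length()</code> times (once per character, not per token, so the Scanner is exhausted early), and <code>t.contains(scr.next())</code> asks whether the token occurs inside t rather than comparing them. A sketch of the corrected logic:</p>

```java
import java.util.Scanner;

public class TokenCheck {
    // True if t occurs as a whitespace-delimited token within s.
    public static boolean containsToken(String s, String t) {
        Scanner scr = new Scanner(s);
        while (scr.hasNext()) {          // loop over tokens, not characters
            if (scr.next().equals(t)) {  // compare whole tokens
                return true;
            }
        }
        return false;
    }
}
```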
| 0debug
|
static int mov_read_hdlr(MOVContext *c, ByteIOContext *pb, MOV_atom_t atom)
{
AVStream *st = c->fc->streams[c->fc->nb_streams-1];
int len = 0;
uint8_t *buf;
uint32_t type;
uint32_t ctype;
print_atom("hdlr", atom);
get_byte(pb);
get_byte(pb); get_byte(pb); get_byte(pb);
ctype = get_le32(pb);
type = get_le32(pb);
#ifdef DEBUG
printf("ctype= %c%c%c%c (0x%08lx)\n", *((char *)&ctype), ((char *)&ctype)[1], ((char *)&ctype)[2], ((char *)&ctype)[3], (long) ctype);
printf("stype= %c%c%c%c\n", *((char *)&type), ((char *)&type)[1], ((char *)&type)[2], ((char *)&type)[3]);
#endif
#ifdef DEBUG
if(ctype == MKTAG('m', 'h', 'l', 'r')) {
if(type == MKTAG('v', 'i', 'd', 'e'))
puts("hdlr: vide");
else if(type == MKTAG('s', 'o', 'u', 'n'))
puts("hdlr: soun");
} else if(ctype == 0) {
if(type == MKTAG('v', 'i', 'd', 'e'))
puts("hdlr: vide");
else if(type == MKTAG('s', 'o', 'u', 'n'))
puts("hdlr: soun");
else if(type == MKTAG('o', 'd', 's', 'm'))
puts("hdlr: odsm");
else if(type == MKTAG('s', 'd', 's', 'm'))
puts("hdlr: sdsm");
} else puts("hdlr: meta");
#endif
if(ctype == MKTAG('m', 'h', 'l', 'r')) {
c->mp4 = 0;
if(type == MKTAG('v', 'i', 'd', 'e'))
st->codec.codec_type = CODEC_TYPE_VIDEO;
else if(type == MKTAG('s', 'o', 'u', 'n'))
st->codec.codec_type = CODEC_TYPE_AUDIO;
} else if(ctype == 0) {
c->mp4 = 1;
if(type == MKTAG('v', 'i', 'd', 'e'))
st->codec.codec_type = CODEC_TYPE_VIDEO;
else if(type == MKTAG('s', 'o', 'u', 'n'))
st->codec.codec_type = CODEC_TYPE_AUDIO;
}
get_be32(pb);
get_be32(pb);
get_be32(pb);
if(atom.size <= 24)
return 0;
if(c->mp4) {
while(get_byte(pb) && (++len < (atom.size - 24)));
} else {
len = get_byte(pb);
#ifdef DEBUG
buf = (uint8_t*) av_malloc(len+1);
if (buf) {
get_buffer(pb, buf, len);
buf[len] = '\0';
printf("**buf='%s'\n", buf);
av_free(buf);
} else
#endif
url_fskip(pb, len);
}
return 0;
}
| 1threat
|
static int m25p80_init(SSISlave *ss)
{
DriveInfo *dinfo;
Flash *s = M25P80(ss);
M25P80Class *mc = M25P80_GET_CLASS(s);
s->pi = mc->pi;
s->size = s->pi->sector_size * s->pi->n_sectors;
s->dirty_page = -1;
s->storage = blk_blockalign(s->blk, s->size);
dinfo = drive_get_next(IF_MTD);
if (dinfo) {
DB_PRINT_L(0, "Binding to IF_MTD drive\n");
s->blk = blk_by_legacy_dinfo(dinfo);
blk_attach_dev_nofail(s->blk, s);
if (blk_read(s->blk, 0, s->storage,
DIV_ROUND_UP(s->size, BDRV_SECTOR_SIZE))) {
fprintf(stderr, "Failed to initialize SPI flash!\n");
return 1;
}
} else {
DB_PRINT_L(0, "No BDRV - binding to RAM\n");
memset(s->storage, 0xFF, s->size);
}
return 0;
}
| 1threat
|
static void versatile_init(MachineState *machine, int board_id)
{
ObjectClass *cpu_oc;
Object *cpuobj;
ARMCPU *cpu;
MemoryRegion *sysmem = get_system_memory();
MemoryRegion *ram = g_new(MemoryRegion, 1);
qemu_irq pic[32];
qemu_irq sic[32];
DeviceState *dev, *sysctl;
SysBusDevice *busdev;
DeviceState *pl041;
PCIBus *pci_bus;
NICInfo *nd;
I2CBus *i2c;
int n;
int done_smc = 0;
DriveInfo *dinfo;
if (!machine->cpu_model) {
machine->cpu_model = "arm926";
cpu_oc = cpu_class_by_name(TYPE_ARM_CPU, machine->cpu_model);
if (!cpu_oc) {
fprintf(stderr, "Unable to find CPU definition\n");
cpuobj = object_new(object_class_get_name(cpu_oc));
memory_region_add_subregion(sysmem, 0, ram);
sysctl = qdev_create(NULL, "realview_sysctl");
qdev_prop_set_uint32(sysctl, "sys_id", 0x41007004);
qdev_prop_set_uint32(sysctl, "proc_id", 0x02000000);
qdev_init_nofail(sysctl);
sysbus_mmio_map(SYS_BUS_DEVICE(sysctl), 0, 0x10000000);
dev = sysbus_create_varargs("pl190", 0x10140000,
qdev_get_gpio_in(DEVICE(cpu), ARM_CPU_IRQ),
qdev_get_gpio_in(DEVICE(cpu), ARM_CPU_FIQ),
NULL);
for (n = 0; n < 32; n++) {
pic[n] = qdev_get_gpio_in(dev, n);
dev = sysbus_create_simple(TYPE_VERSATILE_PB_SIC, 0x10003000, NULL);
for (n = 0; n < 32; n++) {
sysbus_connect_irq(SYS_BUS_DEVICE(dev), n, pic[n]);
sic[n] = qdev_get_gpio_in(dev, n);
sysbus_create_simple("pl050_keyboard", 0x10006000, sic[3]);
sysbus_create_simple("pl050_mouse", 0x10007000, sic[4]);
dev = qdev_create(NULL, "versatile_pci");
busdev = SYS_BUS_DEVICE(dev);
qdev_init_nofail(dev);
sysbus_mmio_map(busdev, 0, 0x10001000);
sysbus_mmio_map(busdev, 1, 0x41000000);
sysbus_mmio_map(busdev, 2, 0x42000000);
sysbus_mmio_map(busdev, 3, 0x43000000);
sysbus_mmio_map(busdev, 4, 0x44000000);
sysbus_mmio_map(busdev, 5, 0x50000000);
sysbus_mmio_map(busdev, 6, 0x60000000);
sysbus_connect_irq(busdev, 0, sic[27]);
sysbus_connect_irq(busdev, 1, sic[28]);
sysbus_connect_irq(busdev, 2, sic[29]);
sysbus_connect_irq(busdev, 3, sic[30]);
pci_bus = (PCIBus *)qdev_get_child_bus(dev, "pci");
for(n = 0; n < nb_nics; n++) {
nd = &nd_table[n];
if (!done_smc && (!nd->model || strcmp(nd->model, "smc91c111") == 0)) {
smc91c111_init(nd, 0x10010000, sic[25]);
done_smc = 1;
} else {
pci_nic_init_nofail(nd, pci_bus, "rtl8139", NULL);
if (machine_usb(machine)) {
pci_create_simple(pci_bus, -1, "pci-ohci");
n = drive_get_max_bus(IF_SCSI);
while (n >= 0) {
pci_create_simple(pci_bus, -1, "lsi53c895a");
n--;
pl011_create(0x101f1000, pic[12], serial_hds[0]);
pl011_create(0x101f2000, pic[13], serial_hds[1]);
pl011_create(0x101f3000, pic[14], serial_hds[2]);
pl011_create(0x10009000, sic[6], serial_hds[3]);
sysbus_create_simple("pl080", 0x10130000, pic[17]);
sysbus_create_simple("sp804", 0x101e2000, pic[4]);
sysbus_create_simple("sp804", 0x101e3000, pic[5]);
sysbus_create_simple("pl061", 0x101e4000, pic[6]);
sysbus_create_simple("pl061", 0x101e5000, pic[7]);
sysbus_create_simple("pl061", 0x101e6000, pic[8]);
sysbus_create_simple("pl061", 0x101e7000, pic[9]);
dev = sysbus_create_simple("pl110_versatile", 0x10120000, pic[16]);
qdev_connect_gpio_out(sysctl, 0, qdev_get_gpio_in(dev, 0));
sysbus_create_varargs("pl181", 0x10005000, sic[22], sic[1], NULL);
sysbus_create_varargs("pl181", 0x1000b000, sic[23], sic[2], NULL);
sysbus_create_simple("pl031", 0x101e8000, pic[10]);
dev = sysbus_create_simple("versatile_i2c", 0x10002000, NULL);
i2c = (I2CBus *)qdev_get_child_bus(dev, "i2c");
i2c_create_slave(i2c, "ds1338", 0x68);
pl041 = qdev_create(NULL, "pl041");
qdev_prop_set_uint32(pl041, "nc_fifo_depth", 512);
qdev_init_nofail(pl041);
sysbus_mmio_map(SYS_BUS_DEVICE(pl041), 0, 0x10004000);
sysbus_connect_irq(SYS_BUS_DEVICE(pl041), 0, sic[24]);
dinfo = drive_get(IF_PFLASH, 0, 0);
if (!pflash_cfi01_register(VERSATILE_FLASH_ADDR, NULL, "versatile.flash",
VERSATILE_FLASH_SIZE,
dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
VERSATILE_FLASH_SECT_SIZE,
VERSATILE_FLASH_SIZE / VERSATILE_FLASH_SECT_SIZE,
4, 0x0089, 0x0018, 0x0000, 0x0, 0)) {
fprintf(stderr, "qemu: Error registering flash memory.\n");
versatile_binfo.ram_size = machine->ram_size;
versatile_binfo.kernel_filename = machine->kernel_filename;
versatile_binfo.kernel_cmdline = machine->kernel_cmdline;
versatile_binfo.initrd_filename = machine->initrd_filename;
versatile_binfo.board_id = board_id;
arm_load_kernel(cpu, &versatile_binfo);
| 1threat
|
def clear_tuple(test_tup):
temp = list(test_tup)
temp.clear()
test_tup = tuple(temp)
return (test_tup)
| 0debug
|
How To check object Names In Loops : <p>I have 31 NumericUpDowns (nup1, nup2, nup3, ..., nup31) and I want to add their values to a DataGridView. I used a "for" loop and a "switch"; now I want to do something like this:</p>
<pre><code>For(int i=1;i<32;i==){
if(nup+i.value>0){
dataGridView1.Rows.Add((nup+i).ToString(0)
}
}
</code></pre>
<p>Can anybody help me?</p>
| 0debug
|
Hyperledger Fabric Composer - restricting access rights of system administrators : <p>My question is on access control in hyperledger fabric composer.</p>
<p>Assume you have a business network, in which you have the following participants:</p>
<ol>
<li>Sellers</li>
<li>(Potential) Buyers </li>
</ol>
<p>A seller is an employee of a company that sells products to a buying company. A buyer is an employee of a buying company.</p>
<p>Example:
The buying company is Daimler. Three employees of Daimler are registered as Buyers in the network.
The selling company is General Electric. Two employees of General Electric are registered as Sellers in the network.</p>
<p>With hyperledger composer's Access Control Language, one can restrict the access rights of buyers and sellers at will.</p>
<p><strong>But how is the situation regarding Access Control at the Node level?</strong></p>
<p>There are not only buyers and sellers but also two system administrators: one system administrator responsible for the Daimler peer and one system administrator responsible for the General Electric peer.</p>
<p>By default, the system administrators have access to all data. That is, the Daimler system administrator has access to all data of the registered General Electric employees. Vice versa, the General Electric system administrator has access to all data of the registered Daimler employees. </p>
<p>Is it possible to restrict the access of the system administrators to a handful of rights, such as:</p>
<ol>
<li>right to install and start the business network</li>
<li>right to control changes to the system made by the other system administrator (e.g. if the Daimler system administrator changes the code of the application, then the General Electric administrator must approve those changes before they can become effective)</li>
<li>Read Access to employees of one's own company</li>
</ol>
| 0debug
|
static int filter_frame(AVFilterLink *inlink, AVFrame *insamplesref)
{
AResampleContext *aresample = inlink->dst->priv;
const int n_in = insamplesref->nb_samples;
int64_t delay;
int n_out = n_in * aresample->ratio + 32;
AVFilterLink *const outlink = inlink->dst->outputs[0];
AVFrame *outsamplesref;
int ret;
delay = swr_get_delay(aresample->swr, outlink->sample_rate);
if (delay > 0)
n_out += FFMIN(delay, FFMAX(4096, n_out));
outsamplesref = ff_get_audio_buffer(outlink, n_out);
if(!outsamplesref)
return AVERROR(ENOMEM);
av_frame_copy_props(outsamplesref, insamplesref);
outsamplesref->format = outlink->format;
outsamplesref->channels = outlink->channels;
outsamplesref->channel_layout = outlink->channel_layout;
outsamplesref->sample_rate = outlink->sample_rate;
if(insamplesref->pts != AV_NOPTS_VALUE) {
int64_t inpts = av_rescale(insamplesref->pts, inlink->time_base.num * (int64_t)outlink->sample_rate * inlink->sample_rate, inlink->time_base.den);
int64_t outpts= swr_next_pts(aresample->swr, inpts);
aresample->next_pts =
outsamplesref->pts = ROUNDED_DIV(outpts, inlink->sample_rate);
} else {
outsamplesref->pts = AV_NOPTS_VALUE;
}
n_out = swr_convert(aresample->swr, outsamplesref->extended_data, n_out,
(void *)insamplesref->extended_data, n_in);
if (n_out <= 0) {
av_frame_free(&outsamplesref);
av_frame_free(&insamplesref);
return 0;
}
aresample->more_data = outsamplesref->nb_samples == n_out;
outsamplesref->nb_samples = n_out;
ret = ff_filter_frame(outlink, outsamplesref);
av_frame_free(&insamplesref);
return ret;
}
| 1threat
|
Android : Compare two variables at different times : I'm a new Android developer, and in my app I would like to compare different variables over time.
My service counts the number of SMS messages on the smartphone.
private static final Uri SMS_URI_ALL = Uri.parse("content://sms/");
final List<String> messages = new ArrayList<>();
String id;
final Cursor cursor = getContentResolver().query(SmsReader.SMS_URI_ALL,null, null,null, null);
assert cursor != null;
if (cursor.moveToFirst()){
do {
id = cursor.getString(cursor.getColumnIndexOrThrow("_id"));
messages.add(id);
currentMessage = messages.size();
} while (cursor.moveToNext());
}
if (! cursor.isClosed()){
cursor.close();
}
Every 15 minutes this service starts, and I would like to compare the value of "currentMessage" and calculate the difference between the old value and the new value of "currentMessage", to know how many messages I have sent or received during this 15-minute period.
I don't know how to implement it, can you help me please :D
Thank you
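On Android the usual approach is to persist the previous count between runs (SharedPreferences is the natural store) and diff it on the next run. A sketch with a plain Map standing in for SharedPreferences, so the arithmetic is self-contained (names are illustrative, not from the question's code):

```java
import java.util.HashMap;
import java.util.Map;

public class MessageDelta {
    // Stand-in for SharedPreferences; on Android you would use
    // getSharedPreferences(...).edit().putInt("lastCount", n).apply()
    // and getInt("lastCount", defaultValue) instead of this Map.
    private final Map<String, Integer> store = new HashMap<>();

    // Returns how many messages appeared since the previous run.
    public int update(int currentMessage) {
        int previous = store.getOrDefault("lastCount", currentMessage);
        store.put("lastCount", currentMessage);
        return currentMessage - previous;
    }
}
```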
| 0debug
|
Run UWP on Android or iOS : <p>I want to add a <strong>finished UWP</strong> app to <strong>Xamarin Forms</strong>. When I do that, can I run this <strong>finished UWP app</strong> on <strong>Android</strong> or <strong>iOS</strong>? It would be best if you answer <strong>"Yes"</strong> or <strong>"No"</strong>. Have a nice day.</p>
| 0debug
|
def remove_similar_row(test_list):
res = set(sorted([tuple(sorted(set(sub))) for sub in test_list]))
return (res)
| 0debug
|
void qemu_peer_set_vnet_hdr_len(NetClientState *nc, int len)
{
if (!nc->peer || !nc->peer->info->set_vnet_hdr_len) {
return;
}
nc->peer->info->set_vnet_hdr_len(nc->peer, len);
}
| 1threat
|
Boolean function on C++ : I am working on an assignment that deals with boolean functions in C++. It asks us to write a boolean function that prompts the user to type in the character X, O, L, or I. If the user types a lowercase version of one of those letters, it should also accept the input, and if they type a word (string) that starts with one of those letters, it should accept that too.
I am just confused about how to write a condition that covers all of those cases. Help please? I know the start looks like this and involves an if-else statement, but I am confused about what I need to put:
bool isValidOption(char option, string & valOptions) {
return true;
}
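One way to cover all three cases (exact letter, lowercase letter, word starting with the letter) is to upper-case the character and look it up in the accepted set. A sketch keeping the question's signature, assuming valOptions holds the accepted letters (e.g. "XOLI") and that for a whole word the caller passes its first character:

```cpp
#include <cctype>
#include <string>

// True when option (upper- or lowercased) is one of the accepted letters.
// For a whole word, call this with word[0].
bool isValidOption(char option, std::string &valOptions) {
    char c = static_cast<char>(std::toupper(static_cast<unsigned char>(option)));
    return valOptions.find(c) != std::string::npos;
}
```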
| 0debug
|
What is the difference between @Inject and @Injectable in Angular 2 typescript : <p>I don't understand when to use @Inject and when to use @Injectable.</p>
<pre><code> import {Component, Inject, provide} from '@angular/core';
import {Hamburger} from '../services/hamburger';
export class App {
bunType: string;
constructor(@Inject(Hamburger) h) {
this.bunType = h.bun.type;
}
}
</code></pre>
<p>And..</p>
<pre><code> import {Injectable} from '@angular/core';
import {Bun} from './bun';
@Injectable()
export class Hamburger {
constructor(public bun: Bun) {
}
}
</code></pre>
| 0debug
|
static int read_header(AVFormatContext *s)
{
WtvContext *wtv = s->priv_data;
int root_sector, root_size;
uint8_t root[WTV_SECTOR_SIZE];
AVIOContext *pb;
int64_t timeline_pos;
int64_t ret;
wtv->epoch =
wtv->pts =
wtv->last_valid_pts = AV_NOPTS_VALUE;
avio_skip(s->pb, 0x30);
root_size = avio_rl32(s->pb);
if (root_size > sizeof(root)) {
av_log(s, AV_LOG_ERROR, "root directory size exceeds sector size\n");
return AVERROR_INVALIDDATA;
}
avio_skip(s->pb, 4);
root_sector = avio_rl32(s->pb);
ret = seek_by_sector(s->pb, root_sector, 0);
if (ret < 0)
return ret;
root_size = avio_read(s->pb, root, root_size);
if (root_size < 0)
return AVERROR_INVALIDDATA;
wtv->pb = wtvfile_open(s, root, root_size, ff_timeline_le16);
if (!wtv->pb) {
av_log(s, AV_LOG_ERROR, "timeline data missing\n");
return AVERROR_INVALIDDATA;
}
ret = parse_chunks(s, SEEK_TO_DATA, 0, 0);
if (ret < 0)
return ret;
avio_seek(wtv->pb, -32, SEEK_CUR);
timeline_pos = avio_tell(s->pb);
pb = wtvfile_open(s, root, root_size, ff_table_0_entries_legacy_attrib_le16);
if (pb) {
parse_legacy_attrib(s, pb);
wtvfile_close(pb);
}
s->ctx_flags |= AVFMTCTX_NOHEADER;
if (s->nb_streams) {
AVStream *st = s->streams[0];
pb = wtvfile_open(s, root, root_size, ff_table_0_entries_time_le16);
if (pb) {
while(1) {
uint64_t timestamp = avio_rl64(pb);
uint64_t frame_nb = avio_rl64(pb);
if (avio_feof(pb))
break;
ff_add_index_entry(&wtv->index_entries, &wtv->nb_index_entries, &wtv->index_entries_allocated_size,
0, timestamp, frame_nb, 0, AVINDEX_KEYFRAME);
}
wtvfile_close(pb);
if (wtv->nb_index_entries) {
pb = wtvfile_open(s, root, root_size, ff_timeline_table_0_entries_Events_le16);
if (pb) {
AVIndexEntry *e = wtv->index_entries;
AVIndexEntry *e_end = wtv->index_entries + wtv->nb_index_entries - 1;
uint64_t last_position = 0;
while (1) {
uint64_t frame_nb = avio_rl64(pb);
uint64_t position = avio_rl64(pb);
while (frame_nb > e->size && e <= e_end) {
e->pos = last_position;
e++;
}
if (avio_feof(pb))
break;
last_position = position;
}
e_end->pos = last_position;
wtvfile_close(pb);
st->duration = e_end->timestamp;
}
}
}
}
avio_seek(s->pb, timeline_pos, SEEK_SET);
return 0;
}
| 1threat
|
Objective - C -> Swift : <p>Please help, I'm new to Swift and I can't "translate" this code from Objective-C to Swift. Any literature or help is appreciated, as well as any analogue of INSTANCETYPE in Swift.</p>
<pre><code>@implementation Message
+ (instancetype)messageWithString:(NSString *)message
{
return [Message messageWithString:message image:nil];
}
+ (instancetype)messageWithString:(NSString *)message image:(UIImage *)image
{
return [[Message alloc] initWithString:message image:image];
}
- (instancetype)initWithString:(NSString *)message
{
return [self initWithString:message image:nil];
}
- (instancetype)initWithString:(NSString *)message image:(UIImage *)image
{
self = [super init];
if(self)
{
_message = message;
_avatar = image;
}
return self;
}
@end
</code></pre>
<p><a href="https://pp.vk.me/c615724/v615724473/e293/Di0cnqGRISA.jpg" rel="nofollow">Screen of code</a></p>
| 0debug
|
How to add/set environment Angular 6 angular.json file : <p>How do I specify the environment to use in Angular 6? The <code>.angular-cli.json</code> file seems to have been replaced by <code>angular.json</code> in this version, and with it the structure of the <code>json</code> within has changed.</p>
<p>How/where in this file do I specify the environments to use?</p>
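<p>In Angular 6 there is no longer an <code>environments</code> key; instead each configuration in <code>angular.json</code> swaps files via <code>fileReplacements</code>. A sketch of the relevant fragment under the project's build target (the paths assume the default CLI layout):</p>

```json
"configurations": {
  "production": {
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.prod.ts"
      }
    ]
  }
}
```

<p>Running <code>ng build --configuration=production</code> (or <code>--prod</code>) then picks up the replacement file.</p>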
| 0debug
|
Xamarin.Android Proguard - Unsupported class version number 52.0 : <p>I'm trying to use Proguard in my Xamarin.Android project, yet the compilation fails with the error <code>Unsupported class version number [52.0] (maximum 51.0, Java 1.7)</code></p>
<p>I saw from those <a href="https://stackoverflow.com/questions/23170502/proguard-says-unsupported-class-version-number-52-0-maximum-51-0-java-1-7-w">two</a> <a href="https://stackoverflow.com/questions/22670059/error-proguard-unsupported-class-version-number">questions</a> that it may be a mismatch between Java 7 and Java 8, more precisely some versions of proguard don't support Java 8. However in Xamarin Preferences -> SDK Location, Java SDK points to JDK 7 : <code>/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home</code></p>
<p>Is there any place where proguard can be configured more precisely ? Any other idea ?</p>
<p>Here's the failure log :</p>
<blockquote>
<p>java.io.IOException: Can't read
[/Library/Frameworks/Xamarin.Android.framework/Versions/7.0.0-18/lib/xbuild-frameworks/MonoAndroid/v7.0/mono.android.jar]
(Can't process class [android/app/ActivityTracker.class] (Unsupported
class version number [52.0] (maximum 51.0, Java 1.7))) at
proguard.InputReader.readInput(InputReader.java:230) at
proguard.InputReader.readInput(InputReader.java:200) at
proguard.InputReader.readInput(InputReader.java:178) at
proguard.InputReader.execute(InputReader.java:78) at
proguard.ProGuard.readInput(ProGuard.java:196) at
proguard.ProGuard.execute(ProGuard.java:78) at
proguard.ProGuard.main(ProGuard.java:492) Caused by:
java.io.IOException: Can't process class
[android/app/ActivityTracker.class] (Unsupported class version number
[52.0] (maximum 51.0, Java 1.7)) at
proguard.io.ClassReader.read(ClassReader.java:112) at
proguard.io.FilteredDataEntryReader.read(FilteredDataEntryReader.java:87)
at proguard.io.JarReader.read(JarReader.java:65) at
proguard.io.DirectoryPump.readFiles(DirectoryPump.java:65) at
proguard.io.DirectoryPump.pumpDataEntries(DirectoryPump.java:53) at
proguard.InputReader.readInput(InputReader.java:226) ... 6 more
Caused by: java.lang.UnsupportedOperationException: Unsupported class
version number [52.0] (maximum 51.0, Java 1.7) at
proguard.classfile.util.ClassUtil.checkVersionNumbers(ClassUtil.java:140)
at
proguard.classfile.io.ProgramClassReader.visitProgramClass(ProgramClassReader.java:88)
at proguard.classfile.ProgramClass.accept(ProgramClass.java:346) at
proguard.io.ClassReader.read(ClassReader.java:91) ... 11 more</p>
<p>9 Warning(s) 1 Error(s)</p>
</blockquote>
| 0debug
|
Why doesn't CSS clip-path with SVG work in Safari? : <p>I have an inline svg and a background image on the masthead.
I am using css clip-path to 'clip' out the svg animation with the image below. </p>
<p>I have it working great in firefox and chrome but safari doesn't apply any of the clipping/masking at all. </p>
<p>I checked the caniuse specs before starting this project, and they state the same rules and exceptions for Safari as for Chrome. I just tested with Chrome first and it worked, so I continued, figuring Safari would treat it the same. </p>
<p>I have been scratching my head trying to figure out how to get the clipping to work properly in Safari, to no avail. </p>
<p>How can I get this to work in safari?
Pen for reference:
<a href="https://codepen.io/H0BB5/pen/Xpawgp" rel="noreferrer">https://codepen.io/H0BB5/pen/Xpawgp</a></p>
<p>HTML</p>
<pre><code><clipPath id="cross">
<rect y="110" x="137" width="90" height="90"/>
<rect x="0" y="110" width="90" height="90"/>
<rect x="137" y="0" width="90" height="90"/>
<rect x="0" y="0" width="90" height="90"/>
</clipPath>
</code></pre>
<p>CSS</p>
<pre><code>#clipped {
margin-bottom: 20px;
clip-path: url(#cross);
}
</code></pre>
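<p>A likely explanation: at the time of the question, Safari applied <code>clip-path</code> only with the <code>-webkit-</code> prefix, and the prefixed property accepted basic shapes (<code>polygon()</code>, <code>inset()</code>, <code>circle()</code>) but not <code>url()</code> references to an SVG <code>&lt;clipPath&gt;</code> — which is exactly what this code relies on. A hedged sketch of a fallback (the polygon is illustrative, not equivalent to the four-rect cross):</p>

```css
#clipped {
  /* Safari: prefixed property, basic shapes only */
  -webkit-clip-path: polygon(0 0, 40% 0, 40% 100%, 0 100%);
  /* Chrome/Firefox: SVG reference */
  clip-path: url(#cross);
}
```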
| 0debug
|
static void external_snapshot_abort(BlkActionState *common)
{
ExternalSnapshotState *state =
DO_UPCAST(ExternalSnapshotState, common, common);
if (state->new_bs) {
if (state->new_bs->backing) {
bdrv_replace_in_backing_chain(state->new_bs, state->old_bs);
}
}
}
| 1threat
|
How can I show a new view controller after 3 seconds in Swift 4 IOS : I want to show a new ViewController after I click on a button, but not immediately. After clicking the button I want to wait 3 seconds, and after the 3 seconds I want the new ViewController to appear.
| 0debug
|
static av_cold int encode_end(AVCodecContext *avctx)
{
LclEncContext *c = avctx->priv_data;
av_freep(&avctx->extradata);
deflateEnd(&c->zstream);
av_frame_free(&avctx->coded_frame);
return 0;
}
| 1threat
|
static int adpcm_decode_frame(AVCodecContext *avctx, void *data,
int *got_frame_ptr, AVPacket *avpkt)
{
const uint8_t *buf = avpkt->data;
int buf_size = avpkt->size;
ADPCMDecodeContext *c = avctx->priv_data;
ADPCMChannelStatus *cs;
int n, m, channel, i;
short *samples;
int16_t **samples_p;
int st;
int count1, count2;
int nb_samples, coded_samples, ret;
GetByteContext gb;
bytestream2_init(&gb, buf, buf_size);
nb_samples = get_nb_samples(avctx, &gb, buf_size, &coded_samples);
if (nb_samples <= 0) {
av_log(avctx, AV_LOG_ERROR, "invalid number of samples in packet\n");
return AVERROR_INVALIDDATA;
}
c->frame.nb_samples = nb_samples;
if ((ret = ff_get_buffer(avctx, &c->frame)) < 0) {
av_log(avctx, AV_LOG_ERROR, "get_buffer() failed\n");
return ret;
}
samples = (short *)c->frame.data[0];
samples_p = (int16_t **)c->frame.extended_data;
if (coded_samples) {
if (coded_samples != nb_samples)
av_log(avctx, AV_LOG_WARNING, "mismatch in coded sample count\n");
c->frame.nb_samples = nb_samples = coded_samples;
}
st = avctx->channels == 2 ? 1 : 0;
switch(avctx->codec->id) {
case AV_CODEC_ID_ADPCM_IMA_QT:
for (channel = 0; channel < avctx->channels; channel++) {
int predictor;
int step_index;
cs = &(c->status[channel]);
predictor = sign_extend(bytestream2_get_be16u(&gb), 16);
step_index = predictor & 0x7F;
predictor &= ~0x7F;
if (cs->step_index == step_index) {
int diff = predictor - cs->predictor;
if (diff < 0)
diff = - diff;
if (diff > 0x7f)
goto update;
} else {
update:
cs->step_index = step_index;
cs->predictor = predictor;
}
if (cs->step_index > 88u){
av_log(avctx, AV_LOG_ERROR, "ERROR: step_index[%d] = %i\n",
channel, cs->step_index);
return AVERROR_INVALIDDATA;
}
samples = samples_p[channel];
for (m = 0; m < 64; m += 2) {
int byte = bytestream2_get_byteu(&gb);
samples[m ] = adpcm_ima_qt_expand_nibble(cs, byte & 0x0F, 3);
samples[m + 1] = adpcm_ima_qt_expand_nibble(cs, byte >> 4 , 3);
}
}
break;
case AV_CODEC_ID_ADPCM_IMA_WAV:
for(i=0; i<avctx->channels; i++){
cs = &(c->status[i]);
cs->predictor = samples_p[i][0] = sign_extend(bytestream2_get_le16u(&gb), 16);
cs->step_index = sign_extend(bytestream2_get_le16u(&gb), 16);
if (cs->step_index > 88u){
av_log(avctx, AV_LOG_ERROR, "ERROR: step_index[%d] = %i\n",
i, cs->step_index);
return AVERROR_INVALIDDATA;
}
}
for (n = 0; n < (nb_samples - 1) / 8; n++) {
for (i = 0; i < avctx->channels; i++) {
cs = &c->status[i];
samples = &samples_p[i][1 + n * 8];
for (m = 0; m < 8; m += 2) {
int v = bytestream2_get_byteu(&gb);
samples[m ] = adpcm_ima_expand_nibble(cs, v & 0x0F, 3);
samples[m + 1] = adpcm_ima_expand_nibble(cs, v >> 4 , 3);
}
}
}
break;
case AV_CODEC_ID_ADPCM_4XM:
for (i = 0; i < avctx->channels; i++)
c->status[i].predictor = sign_extend(bytestream2_get_le16u(&gb), 16);
for (i = 0; i < avctx->channels; i++) {
c->status[i].step_index = sign_extend(bytestream2_get_le16u(&gb), 16);
if (c->status[i].step_index > 88u) {
av_log(avctx, AV_LOG_ERROR, "ERROR: step_index[%d] = %i\n",
i, c->status[i].step_index);
return AVERROR_INVALIDDATA;
}
}
for (i = 0; i < avctx->channels; i++) {
samples = (int16_t *)c->frame.data[i];
cs = &c->status[i];
for (n = nb_samples >> 1; n > 0; n--) {
int v = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ima_expand_nibble(cs, v & 0x0F, 4);
*samples++ = adpcm_ima_expand_nibble(cs, v >> 4 , 4);
}
}
break;
case AV_CODEC_ID_ADPCM_MS:
{
int block_predictor;
block_predictor = bytestream2_get_byteu(&gb);
if (block_predictor > 6) {
av_log(avctx, AV_LOG_ERROR, "ERROR: block_predictor[0] = %d\n",
block_predictor);
return AVERROR_INVALIDDATA;
}
c->status[0].coeff1 = ff_adpcm_AdaptCoeff1[block_predictor];
c->status[0].coeff2 = ff_adpcm_AdaptCoeff2[block_predictor];
if (st) {
block_predictor = bytestream2_get_byteu(&gb);
if (block_predictor > 6) {
av_log(avctx, AV_LOG_ERROR, "ERROR: block_predictor[1] = %d\n",
block_predictor);
return AVERROR_INVALIDDATA;
}
c->status[1].coeff1 = ff_adpcm_AdaptCoeff1[block_predictor];
c->status[1].coeff2 = ff_adpcm_AdaptCoeff2[block_predictor];
}
c->status[0].idelta = sign_extend(bytestream2_get_le16u(&gb), 16);
if (st){
c->status[1].idelta = sign_extend(bytestream2_get_le16u(&gb), 16);
}
c->status[0].sample1 = sign_extend(bytestream2_get_le16u(&gb), 16);
if (st) c->status[1].sample1 = sign_extend(bytestream2_get_le16u(&gb), 16);
c->status[0].sample2 = sign_extend(bytestream2_get_le16u(&gb), 16);
if (st) c->status[1].sample2 = sign_extend(bytestream2_get_le16u(&gb), 16);
*samples++ = c->status[0].sample2;
if (st) *samples++ = c->status[1].sample2;
*samples++ = c->status[0].sample1;
if (st) *samples++ = c->status[1].sample1;
for(n = (nb_samples - 2) >> (1 - st); n > 0; n--) {
int byte = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ms_expand_nibble(&c->status[0 ], byte >> 4 );
*samples++ = adpcm_ms_expand_nibble(&c->status[st], byte & 0x0F);
}
break;
}
case AV_CODEC_ID_ADPCM_IMA_DK4:
for (channel = 0; channel < avctx->channels; channel++) {
cs = &c->status[channel];
cs->predictor = *samples++ = sign_extend(bytestream2_get_le16u(&gb), 16);
cs->step_index = sign_extend(bytestream2_get_le16u(&gb), 16);
if (cs->step_index > 88u){
av_log(avctx, AV_LOG_ERROR, "ERROR: step_index[%d] = %i\n",
channel, cs->step_index);
return AVERROR_INVALIDDATA;
}
}
for (n = nb_samples >> (1 - st); n > 0; n--) {
int v = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ima_expand_nibble(&c->status[0 ], v >> 4 , 3);
*samples++ = adpcm_ima_expand_nibble(&c->status[st], v & 0x0F, 3);
}
break;
case AV_CODEC_ID_ADPCM_IMA_DK3:
{
int last_byte = 0;
int nibble;
int decode_top_nibble_next = 0;
int diff_channel;
const int16_t *samples_end = samples + avctx->channels * nb_samples;
bytestream2_skipu(&gb, 10);
c->status[0].predictor = sign_extend(bytestream2_get_le16u(&gb), 16);
c->status[1].predictor = sign_extend(bytestream2_get_le16u(&gb), 16);
c->status[0].step_index = bytestream2_get_byteu(&gb);
c->status[1].step_index = bytestream2_get_byteu(&gb);
if (c->status[0].step_index > 88u || c->status[1].step_index > 88u){
av_log(avctx, AV_LOG_ERROR, "ERROR: step_index = %i/%i\n",
c->status[0].step_index, c->status[1].step_index);
return AVERROR_INVALIDDATA;
}
diff_channel = c->status[1].predictor;
#define DK3_GET_NEXT_NIBBLE() \
if (decode_top_nibble_next) { \
nibble = last_byte >> 4; \
decode_top_nibble_next = 0; \
} else { \
last_byte = bytestream2_get_byteu(&gb); \
nibble = last_byte & 0x0F; \
decode_top_nibble_next = 1; \
}
while (samples < samples_end) {
DK3_GET_NEXT_NIBBLE();
adpcm_ima_expand_nibble(&c->status[0], nibble, 3);
DK3_GET_NEXT_NIBBLE();
adpcm_ima_expand_nibble(&c->status[1], nibble, 3);
diff_channel = (diff_channel + c->status[1].predictor) / 2;
*samples++ = c->status[0].predictor + c->status[1].predictor;
*samples++ = c->status[0].predictor - c->status[1].predictor;
DK3_GET_NEXT_NIBBLE();
adpcm_ima_expand_nibble(&c->status[0], nibble, 3);
diff_channel = (diff_channel + c->status[1].predictor) / 2;
*samples++ = c->status[0].predictor + c->status[1].predictor;
*samples++ = c->status[0].predictor - c->status[1].predictor;
}
break;
}
case AV_CODEC_ID_ADPCM_IMA_ISS:
for (channel = 0; channel < avctx->channels; channel++) {
cs = &c->status[channel];
cs->predictor = sign_extend(bytestream2_get_le16u(&gb), 16);
cs->step_index = sign_extend(bytestream2_get_le16u(&gb), 16);
if (cs->step_index > 88u){
av_log(avctx, AV_LOG_ERROR, "ERROR: step_index[%d] = %i\n",
channel, cs->step_index);
return AVERROR_INVALIDDATA;
}
}
for (n = nb_samples >> (1 - st); n > 0; n--) {
int v1, v2;
int v = bytestream2_get_byteu(&gb);
if (st) {
v1 = v >> 4;
v2 = v & 0x0F;
} else {
v2 = v >> 4;
v1 = v & 0x0F;
}
*samples++ = adpcm_ima_expand_nibble(&c->status[0 ], v1, 3);
*samples++ = adpcm_ima_expand_nibble(&c->status[st], v2, 3);
}
break;
case AV_CODEC_ID_ADPCM_IMA_APC:
while (bytestream2_get_bytes_left(&gb) > 0) {
int v = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ima_expand_nibble(&c->status[0], v >> 4 , 3);
*samples++ = adpcm_ima_expand_nibble(&c->status[st], v & 0x0F, 3);
}
break;
case AV_CODEC_ID_ADPCM_IMA_OKI:
while (bytestream2_get_bytes_left(&gb) > 0) {
int v = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ima_oki_expand_nibble(&c->status[0], v >> 4 );
*samples++ = adpcm_ima_oki_expand_nibble(&c->status[st], v & 0x0F);
}
break;
case AV_CODEC_ID_ADPCM_IMA_WS:
if (c->vqa_version == 3) {
for (channel = 0; channel < avctx->channels; channel++) {
int16_t *smp = samples_p[channel];
for (n = nb_samples / 2; n > 0; n--) {
int v = bytestream2_get_byteu(&gb);
*smp++ = adpcm_ima_expand_nibble(&c->status[channel], v >> 4 , 3);
*smp++ = adpcm_ima_expand_nibble(&c->status[channel], v & 0x0F, 3);
}
}
} else {
for (n = nb_samples / 2; n > 0; n--) {
for (channel = 0; channel < avctx->channels; channel++) {
int v = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ima_expand_nibble(&c->status[channel], v >> 4 , 3);
samples[st] = adpcm_ima_expand_nibble(&c->status[channel], v & 0x0F, 3);
}
samples += avctx->channels;
}
}
bytestream2_seek(&gb, 0, SEEK_END);
break;
case AV_CODEC_ID_ADPCM_XA:
{
int16_t *out0 = samples_p[0];
int16_t *out1 = samples_p[1];
int samples_per_block = 28 * (3 - avctx->channels) * 4;
int sample_offset = 0;
while (bytestream2_get_bytes_left(&gb) >= 128) {
if ((ret = xa_decode(avctx, out0, out1, buf + bytestream2_tell(&gb),
&c->status[0], &c->status[1],
avctx->channels, sample_offset)) < 0)
return ret;
bytestream2_skipu(&gb, 128);
sample_offset += samples_per_block;
}
break;
}
case AV_CODEC_ID_ADPCM_IMA_EA_EACS:
for (i=0; i<=st; i++) {
c->status[i].step_index = bytestream2_get_le32u(&gb);
if (c->status[i].step_index > 88u) {
av_log(avctx, AV_LOG_ERROR, "ERROR: step_index[%d] = %i\n",
i, c->status[i].step_index);
return AVERROR_INVALIDDATA;
}
}
for (i=0; i<=st; i++)
c->status[i].predictor = bytestream2_get_le32u(&gb);
for (n = nb_samples >> (1 - st); n > 0; n--) {
int byte = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ima_expand_nibble(&c->status[0], byte >> 4, 3);
*samples++ = adpcm_ima_expand_nibble(&c->status[st], byte & 0x0F, 3);
}
break;
case AV_CODEC_ID_ADPCM_IMA_EA_SEAD:
for (n = nb_samples >> (1 - st); n > 0; n--) {
int byte = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ima_expand_nibble(&c->status[0], byte >> 4, 6);
*samples++ = adpcm_ima_expand_nibble(&c->status[st], byte & 0x0F, 6);
}
break;
case AV_CODEC_ID_ADPCM_EA:
{
int previous_left_sample, previous_right_sample;
int current_left_sample, current_right_sample;
int next_left_sample, next_right_sample;
int coeff1l, coeff2l, coeff1r, coeff2r;
int shift_left, shift_right;
if(avctx->channels != 2)
return AVERROR_INVALIDDATA;
current_left_sample = sign_extend(bytestream2_get_le16u(&gb), 16);
previous_left_sample = sign_extend(bytestream2_get_le16u(&gb), 16);
current_right_sample = sign_extend(bytestream2_get_le16u(&gb), 16);
previous_right_sample = sign_extend(bytestream2_get_le16u(&gb), 16);
for (count1 = 0; count1 < nb_samples / 28; count1++) {
int byte = bytestream2_get_byteu(&gb);
coeff1l = ea_adpcm_table[ byte >> 4 ];
coeff2l = ea_adpcm_table[(byte >> 4 ) + 4];
coeff1r = ea_adpcm_table[ byte & 0x0F];
coeff2r = ea_adpcm_table[(byte & 0x0F) + 4];
byte = bytestream2_get_byteu(&gb);
shift_left = 20 - (byte >> 4);
shift_right = 20 - (byte & 0x0F);
for (count2 = 0; count2 < 28; count2++) {
byte = bytestream2_get_byteu(&gb);
next_left_sample = sign_extend(byte >> 4, 4) << shift_left;
next_right_sample = sign_extend(byte, 4) << shift_right;
next_left_sample = (next_left_sample +
(current_left_sample * coeff1l) +
(previous_left_sample * coeff2l) + 0x80) >> 8;
next_right_sample = (next_right_sample +
(current_right_sample * coeff1r) +
(previous_right_sample * coeff2r) + 0x80) >> 8;
previous_left_sample = current_left_sample;
current_left_sample = av_clip_int16(next_left_sample);
previous_right_sample = current_right_sample;
current_right_sample = av_clip_int16(next_right_sample);
*samples++ = current_left_sample;
*samples++ = current_right_sample;
}
}
bytestream2_skip(&gb, 2);
break;
}
case AV_CODEC_ID_ADPCM_EA_MAXIS_XA:
{
int coeff[2][2], shift[2];
for(channel = 0; channel < avctx->channels; channel++) {
int byte = bytestream2_get_byteu(&gb);
for (i=0; i<2; i++)
coeff[channel][i] = ea_adpcm_table[(byte >> 4) + 4*i];
shift[channel] = 20 - (byte & 0x0F);
}
for (count1 = 0; count1 < nb_samples / 2; count1++) {
int byte[2];
byte[0] = bytestream2_get_byteu(&gb);
if (st) byte[1] = bytestream2_get_byteu(&gb);
for(i = 4; i >= 0; i-=4) {
for(channel = 0; channel < avctx->channels; channel++) {
int sample = sign_extend(byte[channel] >> i, 4) << shift[channel];
sample = (sample +
c->status[channel].sample1 * coeff[channel][0] +
c->status[channel].sample2 * coeff[channel][1] + 0x80) >> 8;
c->status[channel].sample2 = c->status[channel].sample1;
c->status[channel].sample1 = av_clip_int16(sample);
*samples++ = c->status[channel].sample1;
}
}
}
bytestream2_seek(&gb, 0, SEEK_END);
break;
}
case AV_CODEC_ID_ADPCM_EA_R1:
case AV_CODEC_ID_ADPCM_EA_R2:
case AV_CODEC_ID_ADPCM_EA_R3: {
const int big_endian = avctx->codec->id == AV_CODEC_ID_ADPCM_EA_R3;
int previous_sample, current_sample, next_sample;
int coeff1, coeff2;
int shift;
unsigned int channel;
uint16_t *samplesC;
int count = 0;
int offsets[6];
for (channel=0; channel<avctx->channels; channel++)
offsets[channel] = (big_endian ? bytestream2_get_be32(&gb) :
bytestream2_get_le32(&gb)) +
(avctx->channels + 1) * 4;
for (channel=0; channel<avctx->channels; channel++) {
bytestream2_seek(&gb, offsets[channel], SEEK_SET);
samplesC = samples_p[channel];
if (avctx->codec->id == AV_CODEC_ID_ADPCM_EA_R1) {
current_sample = sign_extend(bytestream2_get_le16(&gb), 16);
previous_sample = sign_extend(bytestream2_get_le16(&gb), 16);
} else {
current_sample = c->status[channel].predictor;
previous_sample = c->status[channel].prev_sample;
}
for (count1 = 0; count1 < nb_samples / 28; count1++) {
int byte = bytestream2_get_byte(&gb);
if (byte == 0xEE) {
current_sample = sign_extend(bytestream2_get_be16(&gb), 16);
previous_sample = sign_extend(bytestream2_get_be16(&gb), 16);
for (count2=0; count2<28; count2++)
*samplesC++ = sign_extend(bytestream2_get_be16(&gb), 16);
} else {
coeff1 = ea_adpcm_table[ byte >> 4 ];
coeff2 = ea_adpcm_table[(byte >> 4) + 4];
shift = 20 - (byte & 0x0F);
for (count2=0; count2<28; count2++) {
if (count2 & 1)
next_sample = sign_extend(byte, 4) << shift;
else {
byte = bytestream2_get_byte(&gb);
next_sample = sign_extend(byte >> 4, 4) << shift;
}
next_sample += (current_sample * coeff1) +
(previous_sample * coeff2);
next_sample = av_clip_int16(next_sample >> 8);
previous_sample = current_sample;
current_sample = next_sample;
*samplesC++ = current_sample;
}
}
}
if (!count) {
count = count1;
} else if (count != count1) {
av_log(avctx, AV_LOG_WARNING, "per-channel sample count mismatch\n");
count = FFMAX(count, count1);
}
if (avctx->codec->id != AV_CODEC_ID_ADPCM_EA_R1) {
c->status[channel].predictor = current_sample;
c->status[channel].prev_sample = previous_sample;
}
}
c->frame.nb_samples = count * 28;
bytestream2_seek(&gb, 0, SEEK_END);
break;
}
case AV_CODEC_ID_ADPCM_EA_XAS:
for (channel=0; channel<avctx->channels; channel++) {
int coeff[2][4], shift[4];
int16_t *s = samples_p[channel];
for (n = 0; n < 4; n++, s += 32) {
int val = sign_extend(bytestream2_get_le16u(&gb), 16);
for (i=0; i<2; i++)
coeff[i][n] = ea_adpcm_table[(val&0x0F)+4*i];
s[0] = val & ~0x0F;
val = sign_extend(bytestream2_get_le16u(&gb), 16);
shift[n] = 20 - (val & 0x0F);
s[1] = val & ~0x0F;
}
for (m=2; m<32; m+=2) {
s = &samples_p[channel][m];
for (n = 0; n < 4; n++, s += 32) {
int level, pred;
int byte = bytestream2_get_byteu(&gb);
level = sign_extend(byte >> 4, 4) << shift[n];
pred = s[-1] * coeff[0][n] + s[-2] * coeff[1][n];
s[0] = av_clip_int16((level + pred + 0x80) >> 8);
level = sign_extend(byte, 4) << shift[n];
pred = s[0] * coeff[0][n] + s[-1] * coeff[1][n];
s[1] = av_clip_int16((level + pred + 0x80) >> 8);
}
}
}
break;
case AV_CODEC_ID_ADPCM_IMA_AMV:
c->status[0].predictor = sign_extend(bytestream2_get_le16u(&gb), 16);
c->status[0].step_index = bytestream2_get_le16u(&gb);
bytestream2_skipu(&gb, 4);
if (c->status[0].step_index > 88u) {
av_log(avctx, AV_LOG_ERROR, "ERROR: step_index = %i\n",
c->status[0].step_index);
return AVERROR_INVALIDDATA;
}
for (n = nb_samples >> (1 - st); n > 0; n--) {
int v = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ima_expand_nibble(&c->status[0], v >> 4, 3);
*samples++ = adpcm_ima_expand_nibble(&c->status[0], v & 0xf, 3);
}
break;
case AV_CODEC_ID_ADPCM_IMA_SMJPEG:
for (i = 0; i < avctx->channels; i++) {
c->status[i].predictor = sign_extend(bytestream2_get_be16u(&gb), 16);
c->status[i].step_index = bytestream2_get_byteu(&gb);
bytestream2_skipu(&gb, 1);
if (c->status[i].step_index > 88u) {
av_log(avctx, AV_LOG_ERROR, "ERROR: step_index = %i\n",
c->status[i].step_index);
return AVERROR_INVALIDDATA;
}
}
for (n = nb_samples >> (1 - st); n > 0; n--) {
int v = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ima_qt_expand_nibble(&c->status[0 ], v >> 4, 3);
*samples++ = adpcm_ima_qt_expand_nibble(&c->status[st], v & 0xf, 3);
}
break;
case AV_CODEC_ID_ADPCM_CT:
for (n = nb_samples >> (1 - st); n > 0; n--) {
int v = bytestream2_get_byteu(&gb);
*samples++ = adpcm_ct_expand_nibble(&c->status[0 ], v >> 4 );
*samples++ = adpcm_ct_expand_nibble(&c->status[st], v & 0x0F);
}
break;
case AV_CODEC_ID_ADPCM_SBPRO_4:
case AV_CODEC_ID_ADPCM_SBPRO_3:
case AV_CODEC_ID_ADPCM_SBPRO_2:
if (!c->status[0].step_index) {
*samples++ = 128 * (bytestream2_get_byteu(&gb) - 0x80);
if (st)
*samples++ = 128 * (bytestream2_get_byteu(&gb) - 0x80);
c->status[0].step_index = 1;
nb_samples--;
}
if (avctx->codec->id == AV_CODEC_ID_ADPCM_SBPRO_4) {
for (n = nb_samples >> (1 - st); n > 0; n--) {
int byte = bytestream2_get_byteu(&gb);
*samples++ = adpcm_sbpro_expand_nibble(&c->status[0],
byte >> 4, 4, 0);
*samples++ = adpcm_sbpro_expand_nibble(&c->status[st],
byte & 0x0F, 4, 0);
}
} else if (avctx->codec->id == AV_CODEC_ID_ADPCM_SBPRO_3) {
for (n = nb_samples / 3; n > 0; n--) {
int byte = bytestream2_get_byteu(&gb);
*samples++ = adpcm_sbpro_expand_nibble(&c->status[0],
byte >> 5 , 3, 0);
*samples++ = adpcm_sbpro_expand_nibble(&c->status[0],
(byte >> 2) & 0x07, 3, 0);
*samples++ = adpcm_sbpro_expand_nibble(&c->status[0],
byte & 0x03, 2, 0);
}
} else {
for (n = nb_samples >> (2 - st); n > 0; n--) {
int byte = bytestream2_get_byteu(&gb);
*samples++ = adpcm_sbpro_expand_nibble(&c->status[0],
byte >> 6 , 2, 2);
*samples++ = adpcm_sbpro_expand_nibble(&c->status[st],
(byte >> 4) & 0x03, 2, 2);
*samples++ = adpcm_sbpro_expand_nibble(&c->status[0],
(byte >> 2) & 0x03, 2, 2);
*samples++ = adpcm_sbpro_expand_nibble(&c->status[st],
byte & 0x03, 2, 2);
}
}
break;
case AV_CODEC_ID_ADPCM_SWF:
adpcm_swf_decode(avctx, buf, buf_size, samples);
bytestream2_seek(&gb, 0, SEEK_END);
break;
case AV_CODEC_ID_ADPCM_YAMAHA:
for (n = nb_samples >> (1 - st); n > 0; n--) {
int v = bytestream2_get_byteu(&gb);
*samples++ = adpcm_yamaha_expand_nibble(&c->status[0 ], v & 0x0F);
*samples++ = adpcm_yamaha_expand_nibble(&c->status[st], v >> 4 );
}
break;
case AV_CODEC_ID_ADPCM_AFC:
{
int samples_per_block;
int blocks;
if (avctx->extradata && avctx->extradata_size == 1 && avctx->extradata[0]) {
samples_per_block = avctx->extradata[0] / 16;
blocks = nb_samples / avctx->extradata[0];
} else {
samples_per_block = nb_samples / 16;
blocks = 1;
}
for (m = 0; m < blocks; m++) {
for (channel = 0; channel < avctx->channels; channel++) {
int prev1 = c->status[channel].sample1;
int prev2 = c->status[channel].sample2;
samples = samples_p[channel] + m * 16;
for (i = 0; i < samples_per_block; i++) {
int byte = bytestream2_get_byteu(&gb);
int scale = 1 << (byte >> 4);
int index = byte & 0xf;
int factor1 = ff_adpcm_afc_coeffs[0][index];
int factor2 = ff_adpcm_afc_coeffs[1][index];
for (n = 0; n < 16; n++) {
int32_t sampledat;
if (n & 1) {
sampledat = sign_extend(byte, 4);
} else {
byte = bytestream2_get_byteu(&gb);
sampledat = sign_extend(byte >> 4, 4);
}
sampledat = ((prev1 * factor1 + prev2 * factor2) +
((sampledat * scale) << 11)) >> 11;
*samples = av_clip_int16(sampledat);
prev2 = prev1;
prev1 = *samples++;
}
}
c->status[channel].sample1 = prev1;
c->status[channel].sample2 = prev2;
}
}
bytestream2_seek(&gb, 0, SEEK_END);
break;
}
case AV_CODEC_ID_ADPCM_THP:
{
int table[6][16];
int ch;
for (i = 0; i < avctx->channels; i++)
for (n = 0; n < 16; n++)
table[i][n] = sign_extend(bytestream2_get_be16u(&gb), 16);
for (i = 0; i < avctx->channels; i++) {
c->status[i].sample1 = sign_extend(bytestream2_get_be16u(&gb), 16);
c->status[i].sample2 = sign_extend(bytestream2_get_be16u(&gb), 16);
}
for (ch = 0; ch < avctx->channels; ch++) {
samples = samples_p[ch];
for (i = 0; i < nb_samples / 14; i++) {
int byte = bytestream2_get_byteu(&gb);
int index = (byte >> 4) & 7;
unsigned int exp = byte & 0x0F;
int factor1 = table[ch][index * 2];
int factor2 = table[ch][index * 2 + 1];
for (n = 0; n < 14; n++) {
int32_t sampledat;
if (n & 1) {
sampledat = sign_extend(byte, 4);
} else {
byte = bytestream2_get_byteu(&gb);
sampledat = sign_extend(byte >> 4, 4);
}
sampledat = ((c->status[ch].sample1 * factor1
+ c->status[ch].sample2 * factor2) >> 11) + (sampledat << exp);
*samples = av_clip_int16(sampledat);
c->status[ch].sample2 = c->status[ch].sample1;
c->status[ch].sample1 = *samples++;
}
}
}
break;
}
default:
return -1;
}
if (avpkt->size && bytestream2_tell(&gb) == 0) {
av_log(avctx, AV_LOG_ERROR, "Nothing consumed\n");
return AVERROR_INVALIDDATA;
}
*got_frame_ptr = 1;
*(AVFrame *)data = c->frame;
return bytestream2_tell(&gb);
}
| 1threat
|
How do I graph on custom plots in ggplot2? : <p>I am using a plot obtained from the following site:
<a href="https://github.com/statsbylopez/blogposts/blob/master/fball_field.R" rel="nofollow noreferrer">https://github.com/statsbylopez/blogposts/blob/master/fball_field.R</a>
I do not know how to plot points on this. How would I go about doing this?</p>
| 0debug
|
static void lance_init(NICInfo *nd, target_phys_addr_t leaddr,
void *dma_opaque, qemu_irq irq)
{
DeviceState *dev;
SysBusDevice *s;
qemu_irq reset;
qemu_check_nic_model(&nd_table[0], "lance");
dev = qdev_create(NULL, "lance");
dev->nd = nd;
qdev_prop_set_ptr(dev, "dma", dma_opaque);
qdev_init(dev);
s = sysbus_from_qdev(dev);
sysbus_mmio_map(s, 0, leaddr);
sysbus_connect_irq(s, 0, irq);
reset = qdev_get_gpio_in(dev, 0);
qdev_connect_gpio_out(dma_opaque, 0, reset);
}
| 1threat
|
What is the significance of keys in React? : <p>What is the significance of keys in React? I read that using the index in the loop is not the best solution for keys. Why?</p>
| 0debug
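The React-keys question above can be made concrete with a small simulation of keyed matching. This is an illustrative sketch, not React's actual reconciler: it only shows that index-based keys stop lining up with their items as soon as the list shifts, which is why stable ids are preferred.

```javascript
// Simulates why array indexes make poor React keys: when an item is
// removed from the front, every remaining item's index shifts, so a
// keyed diff would treat each survivor as "changed" instead of reusing it.
function diffByKey(prev, next, keyOf) {
  const prevByKey = new Map(prev.map((item, i) => [keyOf(item, i), item]));
  // An item is "reused" when its key still maps to the same value.
  return next.filter((item, i) => prevByKey.get(keyOf(item, i)) === item).length;
}

const items = ['a', 'b', 'c'];
const afterRemoveFirst = ['b', 'c'];

// Stable keys (the item itself): both surviving items are matched.
const reusedStable = diffByKey(items, afterRemoveFirst, (item) => item);
// Index keys: 'b' now sits at index 0 (previously 'a') and 'c' at index 1
// (previously 'b'), so nothing lines up and nothing is reused.
const reusedIndex = diffByKey(items, afterRemoveFirst, (_item, i) => i);
```

With stable keys both survivors are reused; with index keys none are, which in React translates into unnecessary re-renders and state attached to the wrong rows.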
|
int qcow2_check_refcounts(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
int64_t size;
int nb_clusters, refcount1, refcount2, i;
QCowSnapshot *sn;
uint16_t *refcount_table;
int ret, errors = 0;
size = bdrv_getlength(bs->file);
nb_clusters = size_to_clusters(s, size);
refcount_table = qemu_mallocz(nb_clusters * sizeof(uint16_t));
errors += inc_refcounts(bs, refcount_table, nb_clusters,
0, s->cluster_size);
ret = check_refcounts_l1(bs, refcount_table, nb_clusters,
s->l1_table_offset, s->l1_size, 1);
if (ret < 0) {
return ret;
}
errors += ret;
for(i = 0; i < s->nb_snapshots; i++) {
sn = s->snapshots + i;
check_refcounts_l1(bs, refcount_table, nb_clusters,
sn->l1_table_offset, sn->l1_size, 0);
}
errors += inc_refcounts(bs, refcount_table, nb_clusters,
s->snapshots_offset, s->snapshots_size);
errors += inc_refcounts(bs, refcount_table, nb_clusters,
s->refcount_table_offset,
s->refcount_table_size * sizeof(uint64_t));
for(i = 0; i < s->refcount_table_size; i++) {
int64_t offset;
offset = s->refcount_table[i];
if (offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR refcount block %d is not "
"cluster aligned; refcount table entry corrupted\n", i);
errors++;
}
if (offset != 0) {
errors += inc_refcounts(bs, refcount_table, nb_clusters,
offset, s->cluster_size);
if (refcount_table[offset / s->cluster_size] != 1) {
fprintf(stderr, "ERROR refcount block %d refcount=%d\n",
i, refcount_table[offset / s->cluster_size]);
}
}
}
for(i = 0; i < nb_clusters; i++) {
refcount1 = get_refcount(bs, i);
if (refcount1 < 0) {
fprintf(stderr, "Can't get refcount for cluster %d: %s\n",
i, strerror(-refcount1));
}
refcount2 = refcount_table[i];
if (refcount1 != refcount2) {
fprintf(stderr, "ERROR cluster %d refcount=%d reference=%d\n",
i, refcount1, refcount2);
errors++;
}
}
qemu_free(refcount_table);
return errors;
}
| 1threat
|
const AVOption *av_opt_find(void *obj, const char *name, const char *unit,
int opt_flags, int search_flags)
{
AVClass *c = *(AVClass**)obj;
const AVOption *o = NULL;
if (c->opt_find && search_flags & AV_OPT_SEARCH_CHILDREN &&
(o = c->opt_find(obj, name, unit, opt_flags, search_flags)))
return o;
while (o = av_next_option(obj, o)) {
if (!strcmp(o->name, name) && (!unit || (o->unit && !strcmp(o->unit, unit))) &&
(o->flags & opt_flags) == opt_flags)
return o;
}
return NULL;
}
| 1threat
|
How can I have IIS properly serve .webmanifest files on my web site? : <p>The <a href="https://realfavicongenerator.net/" rel="noreferrer">Favicon Generator</a> assembles a package for webmasters to use in order to have icons available for many different devices. The package comes with a file called <code>site.webmanifest</code> which is linked to via the following tag in the web page's document <code><head></code>:</p>
<pre><code><link rel="manifest" href="site.webmanifest">
</code></pre>
<p>According to <a href="https://developer.mozilla.org/en-US/docs/Web/Manifest" rel="noreferrer">Mozilla</a>: <em>"The web app manifest provides information about an application (such as name, author, icon, and description) in a JSON text file. The purpose of the manifest is to install web applications to the homescreen of a device, providing users with quicker access and a richer experience."</em></p>
<p>Unfortunately if you are using Microsoft's Internet Information Services (IIS), you'll get a 404.3 error if you try and access the <code>site.webmanifest</code> file.</p>
<p>The exact error message is as follows: <em>"The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map."</em></p>
<p>How can I properly serve <code>site.webmanifest</code> files in IIS?</p>
| 0debug
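For reference, the usual fix for the IIS question above is to register a MIME type for the <code>.webmanifest</code> extension so IIS will serve it as a static file. A commonly used <code>web.config</code> fragment (placed in the site root) looks like the following; verify the element placement and MIME type against current IIS guidance for your version:

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <!-- application/manifest+json is the registered type for web app manifests -->
      <mimeMap fileExtension=".webmanifest" mimeType="application/manifest+json" />
    </staticContent>
  </system.webServer>
</configuration>
```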
|
static int mxf_set_audio_pts(MXFContext *mxf, AVCodecContext *codec, AVPacket *pkt)
{
MXFTrack *track = mxf->fc->streams[pkt->stream_index]->priv_data;
pkt->pts = track->sample_count;
if (codec->channels <= 0 || av_get_bits_per_sample(codec->codec_id) <= 0)
return AVERROR(EINVAL);
track->sample_count += pkt->size / (codec->channels * av_get_bits_per_sample(codec->codec_id) / 8);
return 0;
}
| 1threat
|
C# - How to use variables within other functions : I am trying to use the variable rand from the Rnd function within the BtnRed_Click function. The return does not do anything, and if I change the parameters of the Rnd function I get an error.
Here is my code.
namespace ColourPick_EDP2
{
public partial class Form1 : Form
{
int RandomNum;
int counter = 60;
int rand;
public Form1()
{
InitializeComponent();
}
private void BtnStart_Click(object sender, EventArgs e)
{
BtnStart.Visible = false;
Tmr.Tick += new EventHandler(tmr_Tick);
Tmr.Start();
TbxDisplay.Text = counter.ToString();
Random rnd = new Random();
RandomNum = rnd.Next(0, 7);
Rnd(RandomNum);
}
private void tmr_Tick(object sender, EventArgs e)
{
counter--;
if (counter == 0)
{
Tmr.Stop();
}
TbxDisplay.Text = counter.ToString();
}
private int Rnd(int rand)
{
int rand = rand;
switch(rand)
{
case 1:
this.BackColor = System.Drawing.Color.Red;
break;
case 2:
this.BackColor = System.Drawing.Color.Blue;
break;
case 3:
this.BackColor = System.Drawing.Color.Green;
break;
case 4:
this.BackColor = System.Drawing.Color.Yellow;
break;
case 5:
this.BackColor = System.Drawing.Color.Purple;
break;
case 6:
this.BackColor = System.Drawing.Color.Orange;
break;
}
return rand;
}
private void BtnRed_Click(object sender, EventArgs e)
{
if (rand == 1)
{
lblScore.Text += 10;
}
}
}
}
| 0debug
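In the C# question above, the core problem is scope: <code>int rand = rand;</code> inside <code>Rnd</code> redeclares the parameter (a compile error in C#) instead of assigning to the class field <code>rand</code>, so <code>BtnRed_Click</code> never sees the picked value. The language-agnostic fix is for both handlers to share one variable in an enclosing scope and assign to it rather than redeclare it. A minimal sketch of that pattern in JavaScript (the function names are my own, not from the question):

```javascript
// Two "handlers" share state by writing to a variable in their enclosing
// scope instead of declaring a new local with the same name.
function makeGame() {
  let rand = 0; // shared state, like the C# field `rand`

  function pickColor(n) {
    rand = n; // assign to the shared variable; do NOT redeclare it
    return rand;
  }

  function onRedClick() {
    return rand === 1 ? 10 : 0; // score awarded only if red (1) was picked
  }

  return { pickColor, onRedClick };
}

const game = makeGame();
game.pickColor(1);
const score = game.onRedClick();
```

The C# equivalent is simply removing the local declaration and writing <code>this.rand = rand;</code> (or renaming the parameter) inside <code>Rnd</code>.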
|
How can an assignment call a function? : <p>My activity:</p>
<pre><code>class PlayerDetails : AppCompatActivity() {
private lateinit var binding: ActivityPlayerDetailsBinding
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = DataBindingUtil.setContentView(this, R.layout.activity_player_details)
}
</code></pre>
<p>How can an assignment (<code>binding = DataBindingUtil.setContentView(this, R.layout.activity_player_details</code>) call a function? (<code>setContentView()</code>)?</p>
| 0debug
|
Object with rows to nested object : **Before**
This is an object with multiple rows:
{
"functions": [
{
"package_id": "2",
"module_id": "2",
"data_id": "2"
},
{
"package_id": "1",
"module_id": "1",
"data_id": "2"
},
{
"package_id": "2",
"module_id": "3",
"data_id": "3"
}
]
}
**Desired result**
I want this to return into a "nested" Object like below, without duplicates:
{
"packages": [
{
"package_id": "2",
"modules": [
{
"module_id": "2",
"data": [
{
"data_id": "2"
}
]
},
{
"module_id": "3",
"data": [
{
"data_id": "3"
}
]
}
]
},{
"package_id": "1",
"modules": [
{
"module_id": "1",
"data": [
{
"data_id": "2"
}
]
}
]
}
]
}
I've already tried loops inside loops, constructing multiple arrays and objects, which either produces duplicates or overwrites objects into single ones. Is there a more generic way to generate this with JavaScript? (It's for an Angular 6 project.)
| 0debug
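One way to answer the nesting question above is a single pass over the flat rows, looking up (or creating) each level before appending, which avoids both duplicates and overwrites. A minimal sketch in plain JavaScript (usable as-is in an Angular project); the helper name <code>nest</code> is my own:

```javascript
// Groups flat {package_id, module_id, data_id} rows into the nested
// packages -> modules -> data shape, skipping duplicates at every level.
function nest(rows) {
  const packages = [];
  for (const { package_id, module_id, data_id } of rows) {
    // Find or create the package bucket.
    let pkg = packages.find((p) => p.package_id === package_id);
    if (!pkg) packages.push(pkg = { package_id, modules: [] });
    // Find or create the module bucket inside that package.
    let mod = pkg.modules.find((m) => m.module_id === module_id);
    if (!mod) pkg.modules.push(mod = { module_id, data: [] });
    // Append the data entry only if it is not already present.
    if (!mod.data.some((d) => d.data_id === data_id)) {
      mod.data.push({ data_id });
    }
  }
  return { packages };
}

const input = {
  functions: [
    { package_id: "2", module_id: "2", data_id: "2" },
    { package_id: "1", module_id: "1", data_id: "2" },
    { package_id: "2", module_id: "3", data_id: "3" },
  ],
};
const result = nest(input.functions);
```

For large inputs the linear `find` lookups can be replaced with `Map`s keyed by id, but the find-or-create structure stays the same.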
|
void ff_h264_pred_init_x86(H264PredContext *h, int codec_id, const int bit_depth, const int chroma_format_idc)
{
#if HAVE_YASM
int mm_flags = av_get_cpu_flags();
if (bit_depth == 8) {
if (mm_flags & AV_CPU_FLAG_MMX) {
h->pred16x16[VERT_PRED8x8 ] = ff_pred16x16_vertical_mmx;
h->pred16x16[HOR_PRED8x8 ] = ff_pred16x16_horizontal_mmx;
if (chroma_format_idc == 1) {
h->pred8x8 [VERT_PRED8x8 ] = ff_pred8x8_vertical_mmx;
h->pred8x8 [HOR_PRED8x8 ] = ff_pred8x8_horizontal_mmx;
}
if (codec_id == AV_CODEC_ID_VP8) {
h->pred16x16[PLANE_PRED8x8 ] = ff_pred16x16_tm_vp8_mmx;
h->pred8x8 [PLANE_PRED8x8 ] = ff_pred8x8_tm_vp8_mmx;
h->pred4x4 [TM_VP8_PRED ] = ff_pred4x4_tm_vp8_mmx;
} else {
if (chroma_format_idc == 1)
h->pred8x8 [PLANE_PRED8x8] = ff_pred8x8_plane_mmx;
if (codec_id == AV_CODEC_ID_SVQ3) {
if (mm_flags & AV_CPU_FLAG_CMOV)
h->pred16x16[PLANE_PRED8x8] = ff_pred16x16_plane_svq3_mmx;
} else if (codec_id == AV_CODEC_ID_RV40) {
h->pred16x16[PLANE_PRED8x8] = ff_pred16x16_plane_rv40_mmx;
} else {
h->pred16x16[PLANE_PRED8x8] = ff_pred16x16_plane_h264_mmx;
}
}
}
if (mm_flags & AV_CPU_FLAG_MMXEXT) {
h->pred16x16[HOR_PRED8x8 ] = ff_pred16x16_horizontal_mmx2;
h->pred16x16[DC_PRED8x8 ] = ff_pred16x16_dc_mmx2;
if (chroma_format_idc == 1)
h->pred8x8[HOR_PRED8x8 ] = ff_pred8x8_horizontal_mmx2;
h->pred8x8l [TOP_DC_PRED ] = ff_pred8x8l_top_dc_mmxext;
h->pred8x8l [DC_PRED ] = ff_pred8x8l_dc_mmxext;
h->pred8x8l [HOR_PRED ] = ff_pred8x8l_horizontal_mmxext;
h->pred8x8l [VERT_PRED ] = ff_pred8x8l_vertical_mmxext;
h->pred8x8l [DIAG_DOWN_RIGHT_PRED ] = ff_pred8x8l_down_right_mmxext;
h->pred8x8l [VERT_RIGHT_PRED ] = ff_pred8x8l_vertical_right_mmxext;
h->pred8x8l [HOR_UP_PRED ] = ff_pred8x8l_horizontal_up_mmxext;
h->pred8x8l [DIAG_DOWN_LEFT_PRED ] = ff_pred8x8l_down_left_mmxext;
h->pred8x8l [HOR_DOWN_PRED ] = ff_pred8x8l_horizontal_down_mmxext;
h->pred4x4 [DIAG_DOWN_RIGHT_PRED ] = ff_pred4x4_down_right_mmxext;
h->pred4x4 [VERT_RIGHT_PRED ] = ff_pred4x4_vertical_right_mmxext;
h->pred4x4 [HOR_DOWN_PRED ] = ff_pred4x4_horizontal_down_mmxext;
h->pred4x4 [DC_PRED ] = ff_pred4x4_dc_mmxext;
if (codec_id == AV_CODEC_ID_VP8 || codec_id == AV_CODEC_ID_H264) {
h->pred4x4 [DIAG_DOWN_LEFT_PRED] = ff_pred4x4_down_left_mmxext;
}
if (codec_id == AV_CODEC_ID_SVQ3 || codec_id == AV_CODEC_ID_H264) {
h->pred4x4 [VERT_LEFT_PRED ] = ff_pred4x4_vertical_left_mmxext;
}
if (codec_id != AV_CODEC_ID_RV40) {
h->pred4x4 [HOR_UP_PRED ] = ff_pred4x4_horizontal_up_mmxext;
}
if (codec_id == AV_CODEC_ID_SVQ3 || codec_id == AV_CODEC_ID_H264) {
if (chroma_format_idc == 1) {
h->pred8x8[TOP_DC_PRED8x8 ] = ff_pred8x8_top_dc_mmxext;
h->pred8x8[DC_PRED8x8 ] = ff_pred8x8_dc_mmxext;
}
}
if (codec_id == AV_CODEC_ID_VP8) {
h->pred16x16[PLANE_PRED8x8 ] = ff_pred16x16_tm_vp8_mmx2;
h->pred8x8 [DC_PRED8x8 ] = ff_pred8x8_dc_rv40_mmxext;
h->pred8x8 [PLANE_PRED8x8 ] = ff_pred8x8_tm_vp8_mmx2;
h->pred4x4 [TM_VP8_PRED ] = ff_pred4x4_tm_vp8_mmx2;
h->pred4x4 [VERT_PRED ] = ff_pred4x4_vertical_vp8_mmxext;
} else {
if (chroma_format_idc == 1)
h->pred8x8 [PLANE_PRED8x8] = ff_pred8x8_plane_mmx2;
if (codec_id == AV_CODEC_ID_SVQ3) {
h->pred16x16[PLANE_PRED8x8 ] = ff_pred16x16_plane_svq3_mmx2;
} else if (codec_id == AV_CODEC_ID_RV40) {
h->pred16x16[PLANE_PRED8x8 ] = ff_pred16x16_plane_rv40_mmx2;
} else {
h->pred16x16[PLANE_PRED8x8 ] = ff_pred16x16_plane_h264_mmx2;
}
}
}
if (mm_flags & AV_CPU_FLAG_SSE) {
h->pred16x16[VERT_PRED8x8] = ff_pred16x16_vertical_sse;
}
if (mm_flags & AV_CPU_FLAG_SSE2) {
h->pred16x16[DC_PRED8x8 ] = ff_pred16x16_dc_sse2;
h->pred8x8l [DIAG_DOWN_LEFT_PRED ] = ff_pred8x8l_down_left_sse2;
h->pred8x8l [DIAG_DOWN_RIGHT_PRED ] = ff_pred8x8l_down_right_sse2;
h->pred8x8l [VERT_RIGHT_PRED ] = ff_pred8x8l_vertical_right_sse2;
h->pred8x8l [VERT_LEFT_PRED ] = ff_pred8x8l_vertical_left_sse2;
h->pred8x8l [HOR_DOWN_PRED ] = ff_pred8x8l_horizontal_down_sse2;
if (codec_id == AV_CODEC_ID_VP8) {
h->pred16x16[PLANE_PRED8x8 ] = ff_pred16x16_tm_vp8_sse2;
h->pred8x8 [PLANE_PRED8x8 ] = ff_pred8x8_tm_vp8_sse2;
} else {
if (chroma_format_idc == 1)
h->pred8x8 [PLANE_PRED8x8] = ff_pred8x8_plane_sse2;
if (codec_id == AV_CODEC_ID_SVQ3) {
h->pred16x16[PLANE_PRED8x8] = ff_pred16x16_plane_svq3_sse2;
} else if (codec_id == AV_CODEC_ID_RV40) {
h->pred16x16[PLANE_PRED8x8] = ff_pred16x16_plane_rv40_sse2;
} else {
h->pred16x16[PLANE_PRED8x8] = ff_pred16x16_plane_h264_sse2;
}
}
}
if (mm_flags & AV_CPU_FLAG_SSSE3) {
h->pred16x16[HOR_PRED8x8 ] = ff_pred16x16_horizontal_ssse3;
h->pred16x16[DC_PRED8x8 ] = ff_pred16x16_dc_ssse3;
if (chroma_format_idc == 1)
h->pred8x8 [HOR_PRED8x8 ] = ff_pred8x8_horizontal_ssse3;
h->pred8x8l [TOP_DC_PRED ] = ff_pred8x8l_top_dc_ssse3;
h->pred8x8l [DC_PRED ] = ff_pred8x8l_dc_ssse3;
h->pred8x8l [HOR_PRED ] = ff_pred8x8l_horizontal_ssse3;
h->pred8x8l [VERT_PRED ] = ff_pred8x8l_vertical_ssse3;
h->pred8x8l [DIAG_DOWN_LEFT_PRED ] = ff_pred8x8l_down_left_ssse3;
h->pred8x8l [DIAG_DOWN_RIGHT_PRED ] = ff_pred8x8l_down_right_ssse3;
h->pred8x8l [VERT_RIGHT_PRED ] = ff_pred8x8l_vertical_right_ssse3;
h->pred8x8l [VERT_LEFT_PRED ] = ff_pred8x8l_vertical_left_ssse3;
h->pred8x8l [HOR_UP_PRED ] = ff_pred8x8l_horizontal_up_ssse3;
h->pred8x8l [HOR_DOWN_PRED ] = ff_pred8x8l_horizontal_down_ssse3;
if (codec_id == AV_CODEC_ID_VP8) {
h->pred8x8 [PLANE_PRED8x8 ] = ff_pred8x8_tm_vp8_ssse3;
h->pred4x4 [TM_VP8_PRED ] = ff_pred4x4_tm_vp8_ssse3;
} else {
if (chroma_format_idc == 1)
h->pred8x8 [PLANE_PRED8x8] = ff_pred8x8_plane_ssse3;
if (codec_id == AV_CODEC_ID_SVQ3) {
h->pred16x16[PLANE_PRED8x8] = ff_pred16x16_plane_svq3_ssse3;
} else if (codec_id == AV_CODEC_ID_RV40) {
h->pred16x16[PLANE_PRED8x8] = ff_pred16x16_plane_rv40_ssse3;
} else {
h->pred16x16[PLANE_PRED8x8] = ff_pred16x16_plane_h264_ssse3;
}
}
}
} else if (bit_depth == 10) {
if (mm_flags & AV_CPU_FLAG_MMXEXT) {
h->pred4x4[DC_PRED ] = ff_pred4x4_dc_10_mmxext;
h->pred4x4[HOR_UP_PRED ] = ff_pred4x4_horizontal_up_10_mmxext;
if (chroma_format_idc == 1)
h->pred8x8[DC_PRED8x8 ] = ff_pred8x8_dc_10_mmxext;
h->pred8x8l[DC_128_PRED ] = ff_pred8x8l_128_dc_10_mmxext;
h->pred16x16[DC_PRED8x8 ] = ff_pred16x16_dc_10_mmxext;
h->pred16x16[TOP_DC_PRED8x8 ] = ff_pred16x16_top_dc_10_mmxext;
h->pred16x16[DC_128_PRED8x8 ] = ff_pred16x16_128_dc_10_mmxext;
h->pred16x16[LEFT_DC_PRED8x8 ] = ff_pred16x16_left_dc_10_mmxext;
h->pred16x16[VERT_PRED8x8 ] = ff_pred16x16_vertical_10_mmxext;
h->pred16x16[HOR_PRED8x8 ] = ff_pred16x16_horizontal_10_mmxext;
}
if (mm_flags & AV_CPU_FLAG_SSE2) {
h->pred4x4[DIAG_DOWN_LEFT_PRED ] = ff_pred4x4_down_left_10_sse2;
h->pred4x4[DIAG_DOWN_RIGHT_PRED] = ff_pred4x4_down_right_10_sse2;
h->pred4x4[VERT_LEFT_PRED ] = ff_pred4x4_vertical_left_10_sse2;
h->pred4x4[VERT_RIGHT_PRED ] = ff_pred4x4_vertical_right_10_sse2;
h->pred4x4[HOR_DOWN_PRED ] = ff_pred4x4_horizontal_down_10_sse2;
if (chroma_format_idc == 1) {
h->pred8x8[DC_PRED8x8 ] = ff_pred8x8_dc_10_sse2;
h->pred8x8[TOP_DC_PRED8x8 ] = ff_pred8x8_top_dc_10_sse2;
h->pred8x8[PLANE_PRED8x8 ] = ff_pred8x8_plane_10_sse2;
h->pred8x8[VERT_PRED8x8 ] = ff_pred8x8_vertical_10_sse2;
h->pred8x8[HOR_PRED8x8 ] = ff_pred8x8_horizontal_10_sse2;
}
h->pred8x8l[VERT_PRED ] = ff_pred8x8l_vertical_10_sse2;
h->pred8x8l[HOR_PRED ] = ff_pred8x8l_horizontal_10_sse2;
h->pred8x8l[DC_PRED ] = ff_pred8x8l_dc_10_sse2;
h->pred8x8l[DC_128_PRED ] = ff_pred8x8l_128_dc_10_sse2;
h->pred8x8l[TOP_DC_PRED ] = ff_pred8x8l_top_dc_10_sse2;
h->pred8x8l[DIAG_DOWN_LEFT_PRED ] = ff_pred8x8l_down_left_10_sse2;
h->pred8x8l[DIAG_DOWN_RIGHT_PRED] = ff_pred8x8l_down_right_10_sse2;
h->pred8x8l[VERT_RIGHT_PRED ] = ff_pred8x8l_vertical_right_10_sse2;
h->pred8x8l[HOR_UP_PRED ] = ff_pred8x8l_horizontal_up_10_sse2;
h->pred16x16[DC_PRED8x8 ] = ff_pred16x16_dc_10_sse2;
h->pred16x16[TOP_DC_PRED8x8 ] = ff_pred16x16_top_dc_10_sse2;
h->pred16x16[DC_128_PRED8x8 ] = ff_pred16x16_128_dc_10_sse2;
h->pred16x16[LEFT_DC_PRED8x8 ] = ff_pred16x16_left_dc_10_sse2;
h->pred16x16[VERT_PRED8x8 ] = ff_pred16x16_vertical_10_sse2;
h->pred16x16[HOR_PRED8x8 ] = ff_pred16x16_horizontal_10_sse2;
}
if (mm_flags & AV_CPU_FLAG_SSSE3) {
h->pred4x4[DIAG_DOWN_RIGHT_PRED] = ff_pred4x4_down_right_10_ssse3;
h->pred4x4[VERT_RIGHT_PRED ] = ff_pred4x4_vertical_right_10_ssse3;
h->pred4x4[HOR_DOWN_PRED ] = ff_pred4x4_horizontal_down_10_ssse3;
h->pred8x8l[HOR_PRED ] = ff_pred8x8l_horizontal_10_ssse3;
h->pred8x8l[DIAG_DOWN_LEFT_PRED ] = ff_pred8x8l_down_left_10_ssse3;
h->pred8x8l[DIAG_DOWN_RIGHT_PRED] = ff_pred8x8l_down_right_10_ssse3;
h->pred8x8l[VERT_RIGHT_PRED ] = ff_pred8x8l_vertical_right_10_ssse3;
h->pred8x8l[HOR_UP_PRED ] = ff_pred8x8l_horizontal_up_10_ssse3;
}
#if HAVE_AVX
if (mm_flags & AV_CPU_FLAG_AVX) {
h->pred4x4[DIAG_DOWN_LEFT_PRED ] = ff_pred4x4_down_left_10_avx;
h->pred4x4[DIAG_DOWN_RIGHT_PRED] = ff_pred4x4_down_right_10_avx;
h->pred4x4[VERT_LEFT_PRED ] = ff_pred4x4_vertical_left_10_avx;
h->pred4x4[VERT_RIGHT_PRED ] = ff_pred4x4_vertical_right_10_avx;
h->pred4x4[HOR_DOWN_PRED ] = ff_pred4x4_horizontal_down_10_avx;
h->pred8x8l[VERT_PRED ] = ff_pred8x8l_vertical_10_avx;
h->pred8x8l[HOR_PRED ] = ff_pred8x8l_horizontal_10_avx;
h->pred8x8l[DC_PRED ] = ff_pred8x8l_dc_10_avx;
h->pred8x8l[TOP_DC_PRED ] = ff_pred8x8l_top_dc_10_avx;
h->pred8x8l[DIAG_DOWN_RIGHT_PRED] = ff_pred8x8l_down_right_10_avx;
h->pred8x8l[DIAG_DOWN_LEFT_PRED ] = ff_pred8x8l_down_left_10_avx;
h->pred8x8l[VERT_RIGHT_PRED ] = ff_pred8x8l_vertical_right_10_avx;
h->pred8x8l[HOR_UP_PRED ] = ff_pred8x8l_horizontal_up_10_avx;
}
#endif
}
#endif
}
| 1threat
|
Get unreadable string in C# : I get this string from the network:
$A grQ05Ah@‘)���ÿÿûÿÿ����°#~À‚¡U
But in fact this string should be in this format:
*HQ,XXXXXX,41,4#V1,time,A,**Lat**,N/S,**Lng**,W/E,000.00,000,date,FFFFFBFF,432,35,32448,334
How can I convert the string to the standard format in C#?
I converted the data to bytes, as you can see:
24-41-20-20-67-72-51-30-35-41-68-40-91-29-3F-3F-3F-FF-FF-FB-FF-FF-3F-3F-3F-3F-B0-23-7E-C0-82-A1-55
| 0debug
|
How do I dynamically add textboxes without refreshing the page, and then insert their values into the database? :
- I want to be able to dynamically add text fields whenever the button is clicked.
- I want to be able to get the data from the text fields and insert it into the database.
- Everything else works as of now; I just need this. Maybe I can also add a remove button to remove the last text field dynamically.
- The reason this needs to be dynamic is that I don't want to refresh the page, because all the data would be lost.
<html>
<form action="checklist3.php" method="post">
<button type='submit' name='submit' id='buttonParent'>Submit</button>
<button type="submit" name="back">Back</button>
<?php
session_start();
require_once('../mysql_connect.php');
$rowarray=$_SESSION['rowarray'];
$dishname=$_SESSION['dishname'];
echo
'<br><br><br><br><br>
<div class = "Table">
<table border = "2pt solid black" align = "left" cellpadding = "2px"
bordercolor = black>
<tr>
<td width = "7%">
<div align = "left"><b>BRAND NAME</div></b>
</td>
<td width = "7%">
<div align = "left"><b>INGREDIENT</div></b>
</td>
<td width = "3%">
<div align = "left"><b>QUANTITY</div></b>
</td>
<td width = "7%">
<div align = "left"><b>MEASUREMENT</div></b>
</td>
</tr>';
for($x=0;$x<sizeof($rowarray);$x++)
{
$query = "select R.name AS RAWNAME, I.name AS INGREDIENTNAME, R.quantity AS RAWQUANTITY from rawmaterial R JOIN ingredient I ON R.ingredient_id = I.ingredient_id where R.rawmaterial_id='{$rowarray[$x]}'";
$res = mysqli_query($dbc, $query);
while($fetch = mysqli_fetch_array($res, MYSQL_ASSOC))
{
echo "<tr>
<td width=\"7%\">
<div align=\"left\">{$fetch['RAWNAME']}</div>
</td>
<td width=\"3%\">
<div align=\"left\">{$fetch['INGREDIENTNAME']}</div>
</td>
<td width=\"3%\">
<div align=\"left\"><input type='name' name='quantity[]' placeholder={$fetch['RAWQUANTITY']}></input></div>
</td>
<td width=\"7%\">
<div align=\"left\">";
echo "<select class='measure[]' name = 'measure[]'>";
$mesr = mysqli_query($dbc, 'select measure from measure_ref');
while($row=mysqli_fetch_array($mesr,MYSQLI_ASSOC))
{
$mes=$row['measure'];
echo '<option value ='.$mes.'>'.$mes.'</option>';
}
echo "</select>";
echo" </div>";
echo"</td>";
echo"</tr>";
}
}
echo '</table></div>';
?>
<?php
if(isset($_POST['submit']))
{
$quantarray=array();
$quant=$_POST['quantity'];
$row2=array();
foreach($quant as $row2)
{
array_push($quantarray,$row2);
}
//error checking for negative quantities
for($x=0;$x<sizeof($quantarray);$x++)
{
$select="select quantity from rawmaterial where rawmaterial_id='{$rowarray[$x]}'";
$resselect=mysqli_query($dbc,$select);
while($row=mysqli_fetch_array($resselect,MYSQLI_ASSOC))
{
$q=$row['quantity'];
if($q-$quantarray[$x]<0)
{
$quantarray=array();
$row2=array();
echo 'insufficient quantity';
}
}
}
$messarray=array();
$mes=$_POST['measure'];
$row=array();
foreach($mes as $row)
{
array_push($messarray,$row);
}
print_r($messarray);
print_r($quantarray);
$size=sizeof($messarray);
for($x=0;$x<$size;$x++)
{
$getquant="select quantity from rawmaterial where rawmaterial_id='{$rowarray[$x]}'";
$resquant=mysqli_query($dbc,$getquant);
$row = mysqli_fetch_array($resquant, MYSQL_ASSOC);
$newquant=$row['quantity']-$quantarray[$x];
echo $newquant;
$q2="UPDATE rawmaterial set measure='{$messarray[$x]}', quantity='{$newquant}' where rawmaterial_id='{$rowarray[$x]}'";
$res2=mysqli_query($dbc,$q2);
$select="select * from rawmaterial where rawmaterial_id='{$rowarray[$x]}'";
$resselect=mysqli_query($dbc,$select);
$fetch2 = mysqli_fetch_array($resselect, MYSQL_ASSOC);
$rmid=$fetch2['rawmaterial_id'];
$getrecipeid="select recipe_id from recipe where recipe_name='{$dishname}'";
$recipeid=mysqli_query($dbc,$getrecipeid);
$fetch3 = mysqli_fetch_array($recipeid, MYSQL_ASSOC);
$recid=$fetch3['recipe_id'];
$q="insert into recipe_items (recipe_id, rawmaterial_id,quantity,measure) values('{$recid}','{$rmid}', '{$quantarray[$x]}' , '{$messarray[$x]}')";
$result = mysqli_query($dbc, $q);
}
}
echo '<br>';
echo "Add Procedures";
echo '<a href="#" id="plus_row">Add Row</a>';<---------------- This is the add button
echo "<input type='button' value='Remove Button' id='removeButton'>";
echo "<body>";
echo "</body>";
if(isset($_POST['home']))
{
header("Location: http://".$_SERVER['HTTP_HOST']. dirname($_SERVER['PHP_SELF'])."/chefmenu.php");
}
if(isset($_POST['back']))
{
header("Location: http://".$_SERVER['HTTP_HOST']. dirname($_SERVER['PHP_SELF'])."/checklist2.php");
}
if(isset($_POST['add']))
{
[THIS IS WHERE THE CODE SHOULD BE]
}
?>
</form>
</html>
| 0debug
|
c++ include guards don't work, error : <p>When I compile this code, I get the error "Error LNK2005 "int a" (?a@@3HA) already defined in file.obj".
The code:
main.cpp:</p>
<pre><code>#include "header.h"
int main()
{
return 0;
}
</code></pre>
<p>file.cpp:</p>
<pre><code>#include "header.h"
void function()
{
}
</code></pre>
<p>header.h:</p>
<pre><code>#ifndef HEADER
#define HEADER
int a;
#endif
</code></pre>
<p>Thanks in advance</p>
| 0debug
|
How to make ® become uppercase? : I try to put ® after a brand name in MySQL data, say Apple®. In the code it is uppercase, but when it is displayed in HTML it becomes normal. I want to display it in uppercase in HTML — how can I do that? CSS? JS? Or is there any Unicode character that displays directly as uppercase?
| 0debug
|
How to solve the matrix equations in Python/Matlab? : <p>I plan to write Python/Matlab code to solve the matrix equation system in <a href="https://math.stackexchange.com/questions/2402866/how-to-obtain-an-solution-of-the-following-matrix-equation-system">https://math.stackexchange.com/questions/2402866/how-to-obtain-an-solution-of-the-following-matrix-equation-system</a></p>
<p>Is there any way to write the equations in matrix form for Python or Matlab to solve?
Thanks.</p>
| 0debug
|
static void cirrus_init_common(CirrusVGAState * s, int device_id, int is_pci)
{
int i;
static int inited;
if (!inited) {
inited = 1;
for(i = 0;i < 256; i++)
rop_to_index[i] = CIRRUS_ROP_NOP_INDEX;
rop_to_index[CIRRUS_ROP_0] = 0;
rop_to_index[CIRRUS_ROP_SRC_AND_DST] = 1;
rop_to_index[CIRRUS_ROP_NOP] = 2;
rop_to_index[CIRRUS_ROP_SRC_AND_NOTDST] = 3;
rop_to_index[CIRRUS_ROP_NOTDST] = 4;
rop_to_index[CIRRUS_ROP_SRC] = 5;
rop_to_index[CIRRUS_ROP_1] = 6;
rop_to_index[CIRRUS_ROP_NOTSRC_AND_DST] = 7;
rop_to_index[CIRRUS_ROP_SRC_XOR_DST] = 8;
rop_to_index[CIRRUS_ROP_SRC_OR_DST] = 9;
rop_to_index[CIRRUS_ROP_NOTSRC_OR_NOTDST] = 10;
rop_to_index[CIRRUS_ROP_SRC_NOTXOR_DST] = 11;
rop_to_index[CIRRUS_ROP_SRC_OR_NOTDST] = 12;
rop_to_index[CIRRUS_ROP_NOTSRC] = 13;
rop_to_index[CIRRUS_ROP_NOTSRC_OR_DST] = 14;
rop_to_index[CIRRUS_ROP_NOTSRC_AND_NOTDST] = 15;
s->device_id = device_id;
if (is_pci)
s->bustype = CIRRUS_BUSTYPE_PCI;
else
s->bustype = CIRRUS_BUSTYPE_ISA;
}
register_ioport_write(0x3c0, 16, 1, vga_ioport_write, s);
register_ioport_write(0x3b4, 2, 1, vga_ioport_write, s);
register_ioport_write(0x3d4, 2, 1, vga_ioport_write, s);
register_ioport_write(0x3ba, 1, 1, vga_ioport_write, s);
register_ioport_write(0x3da, 1, 1, vga_ioport_write, s);
register_ioport_read(0x3c0, 16, 1, vga_ioport_read, s);
register_ioport_read(0x3b4, 2, 1, vga_ioport_read, s);
register_ioport_read(0x3d4, 2, 1, vga_ioport_read, s);
register_ioport_read(0x3ba, 1, 1, vga_ioport_read, s);
register_ioport_read(0x3da, 1, 1, vga_ioport_read, s);
s->vga_io_memory = cpu_register_io_memory(0, cirrus_vga_mem_read,
cirrus_vga_mem_write, s);
cpu_register_physical_memory(isa_mem_base + 0x000a0000, 0x20000,
s->vga_io_memory);
qemu_register_coalesced_mmio(isa_mem_base + 0x000a0000, 0x20000);
s->cirrus_linear_io_addr =
cpu_register_io_memory(0, cirrus_linear_read, cirrus_linear_write, s);
s->cirrus_linear_write = cpu_get_io_memory_write(s->cirrus_linear_io_addr);
s->cirrus_linear_bitblt_io_addr =
cpu_register_io_memory(0, cirrus_linear_bitblt_read,
cirrus_linear_bitblt_write, s);
s->cirrus_mmio_io_addr =
cpu_register_io_memory(0, cirrus_mmio_read, cirrus_mmio_write, s);
s->real_vram_size =
(s->device_id == CIRRUS_ID_CLGD5446) ? 4096 * 1024 : 2048 * 1024;
s->cirrus_addr_mask = s->real_vram_size - 1;
s->linear_mmio_mask = s->real_vram_size - 256;
s->get_bpp = cirrus_get_bpp;
s->get_offsets = cirrus_get_offsets;
s->get_resolution = cirrus_get_resolution;
s->cursor_invalidate = cirrus_cursor_invalidate;
s->cursor_draw_line = cirrus_cursor_draw_line;
qemu_register_reset(cirrus_reset, s);
cirrus_reset(s);
register_savevm("cirrus_vga", 0, 2, cirrus_vga_save, cirrus_vga_load, s);
}
| 1threat
|
static void process_ncq_command(AHCIState *s, int port, uint8_t *cmd_fis,
int slot)
{
AHCIDevice *ad = &s->dev[port];
IDEState *ide_state = &ad->port.ifs[0];
NCQFrame *ncq_fis = (NCQFrame*)cmd_fis;
uint8_t tag = ncq_fis->tag >> 3;
NCQTransferState *ncq_tfs = &ad->ncq_tfs[tag];
size_t size;
if (ncq_tfs->used) {
fprintf(stderr, "%s: tag %d already used\n", __FUNCTION__, tag);
return;
}
ncq_tfs->used = 1;
ncq_tfs->drive = ad;
ncq_tfs->slot = slot;
ncq_tfs->cmd = ncq_fis->command;
ncq_tfs->lba = ((uint64_t)ncq_fis->lba5 << 40) |
((uint64_t)ncq_fis->lba4 << 32) |
((uint64_t)ncq_fis->lba3 << 24) |
((uint64_t)ncq_fis->lba2 << 16) |
((uint64_t)ncq_fis->lba1 << 8) |
(uint64_t)ncq_fis->lba0;
ncq_tfs->tag = tag;
if (tag != slot) {
DPRINTF(port, "Warn: NCQ slot (%d) did not match the given tag (%d)\n",
slot, tag);
}
if (ncq_fis->aux0 || ncq_fis->aux1 || ncq_fis->aux2 || ncq_fis->aux3) {
DPRINTF(port, "Warn: Attempt to use NCQ auxiliary fields.\n");
}
if (ncq_fis->prio || ncq_fis->icc) {
DPRINTF(port, "Warn: Unsupported attempt to use PRIO/ICC fields\n");
}
if (ncq_fis->fua & NCQ_FIS_FUA_MASK) {
DPRINTF(port, "Warn: Unsupported attempt to use Force Unit Access\n");
}
if (ncq_fis->tag & NCQ_FIS_RARC_MASK) {
DPRINTF(port, "Warn: Unsupported attempt to use Rebuild Assist\n");
}
ncq_tfs->sector_count = ((uint16_t)ncq_fis->sector_count_high << 8) |
ncq_fis->sector_count_low;
size = ncq_tfs->sector_count * 512;
ahci_populate_sglist(ad, &ncq_tfs->sglist, size, 0);
if (ncq_tfs->sglist.size < size) {
error_report("ahci: PRDT length for NCQ command (0x%zx) "
"is smaller than the requested size (0x%zx)",
ncq_tfs->sglist.size, size);
qemu_sglist_destroy(&ncq_tfs->sglist);
ncq_err(ncq_tfs);
ahci_trigger_irq(ad->hba, ad, PORT_IRQ_OVERFLOW);
return;
} else if (ncq_tfs->sglist.size != size) {
DPRINTF(port, "Warn: PRDTL (0x%zx)"
" does not match requested size (0x%zx)",
ncq_tfs->sglist.size, size);
}
DPRINTF(port, "NCQ transfer LBA from %"PRId64" to %"PRId64", "
"drive max %"PRId64"\n",
ncq_tfs->lba, ncq_tfs->lba + ncq_tfs->sector_count - 1,
ide_state->nb_sectors - 1);
switch (ncq_tfs->cmd) {
case READ_FPDMA_QUEUED:
DPRINTF(port, "NCQ reading %d sectors from LBA %"PRId64", "
"tag %d\n",
ncq_tfs->sector_count, ncq_tfs->lba, ncq_tfs->tag);
DPRINTF(port, "tag %d aio read %"PRId64"\n",
ncq_tfs->tag, ncq_tfs->lba);
dma_acct_start(ide_state->blk, &ncq_tfs->acct,
&ncq_tfs->sglist, BLOCK_ACCT_READ);
ncq_tfs->aiocb = dma_blk_read(ide_state->blk,
&ncq_tfs->sglist, ncq_tfs->lba,
ncq_cb, ncq_tfs);
break;
case WRITE_FPDMA_QUEUED:
DPRINTF(port, "NCQ writing %d sectors to LBA %"PRId64", tag %d\n",
ncq_tfs->sector_count, ncq_tfs->lba, ncq_tfs->tag);
DPRINTF(port, "tag %d aio write %"PRId64"\n",
ncq_tfs->tag, ncq_tfs->lba);
dma_acct_start(ide_state->blk, &ncq_tfs->acct,
&ncq_tfs->sglist, BLOCK_ACCT_WRITE);
ncq_tfs->aiocb = dma_blk_write(ide_state->blk,
&ncq_tfs->sglist, ncq_tfs->lba,
ncq_cb, ncq_tfs);
break;
default:
if (is_ncq(cmd_fis[2])) {
DPRINTF(port,
"error: unsupported NCQ command (0x%02x) received\n",
cmd_fis[2]);
} else {
DPRINTF(port,
"error: tried to process non-NCQ command as NCQ\n");
}
qemu_sglist_destroy(&ncq_tfs->sglist);
ncq_err(ncq_tfs);
}
}
| 1threat
|
static void usage(const char *cmd)
{
printf(
"Usage: %s [-m <method> -p <path>] [<options>]\n"
"QEMU Guest Agent %s\n"
"\n"
" -m, --method transport method: one of unix-listen, virtio-serial, or\n"
" isa-serial (virtio-serial is the default)\n"
" -p, --path device/socket path (the default for virtio-serial is:\n"
" %s)\n"
" -l, --logfile set logfile path, logs to stderr by default\n"
" -f, --pidfile specify pidfile (default is %s)\n"
" -v, --verbose log extra debugging information\n"
" -V, --version print version information and exit\n"
" -d, --daemonize become a daemon\n"
#ifdef _WIN32
" -s, --service service commands: install, uninstall\n"
#endif
" -b, --blacklist comma-separated list of RPCs to disable (no spaces, \"?\"\n"
" to list available RPCs)\n"
" -h, --help display this help and exit\n"
"\n"
"Report bugs to <mdroth@linux.vnet.ibm.com>\n"
, cmd, QGA_VERSION, QGA_VIRTIO_PATH_DEFAULT, QGA_PIDFILE_DEFAULT);
}
| 1threat
|
How can I select data from different tables? MySQL : I have these tables:
ARTICLE, ARTICLE_has_tag, TAG, POST_ARTICLE and USER.
I would like to select everything from ARTICLE, plus the name of one tag (even though articles can have more than one), the number of comments, and the user who wrote it. Is it possible? Should I do more than one query?
ARTICLE
`ID_ARTICLE` int(11) NOT NULL AUTO_INCREMENT,
`TITLE_ARTICLE` varchar(45) NOT NULL,
`SUBTITLE_ARTICLE` varchar(45) NOT NULL,
`REFEREE_ARTICLE` text,
`LINEUP_ARTICLE` text,
`LINEUP_OPPONENT_ARTICLE` text,
`CARD_ARTICLE` text,
`CARD_OPPONENT_ARTICLE` text,
`CHANGE_ARTICLE` text,
`CHANGE_OPPONENT_ARTICLE` text,
`GOALS_ARTICLE` text,
`CONTENT_ARTICLE` text NOT NULL,
`CREATED_ARTICLE` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`IMAGE_ARTICLE` text,
`MATCH_ID` int(11) NOT NULL,
`USER_ID` int(11) NOT NULL,
PRIMARY KEY (`ID_ARTICLE`),
ARTICLE_has_TAG
`ARTICLE_ID` int(11) NOT NULL,
`TAG_ID` int(11) NOT NULL,
POST_ARTICLE
`ID_POST_ARTICLE` int(11) NOT NULL AUTO_INCREMENT,
`CONTENT_POST_ARTICLE` text NOT NULL,
`CREATED_POST_ARTICLE` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`USER_ID` int(11) NOT NULL,
`ARTICLE_ID` int(11) NOT NULL,
TAG
`ID_TAG` int(11) NOT NULL AUTO_INCREMENT,
`NAME_TAG` text NOT NULL,
USER
`ID_USER` int(11) NOT NULL AUTO_INCREMENT,
`USERNAME_USER` text NOT NULL,
`FIRSTNAME_USER` varchar(45) NOT NULL,
`LASTNAME_USER` varchar(45) NOT NULL,
`EMAIL_USER` text NOT NULL,
`PASSWORD_USER` text NOT NULL,
`TYPE_USER` int(1) NOT NULL DEFAULT '1',
`IMAGE_USER` varchar(100) DEFAULT '245x342.jpg',
`KEY_USER` text NOT NULL,
`ACTIVATED_USER` int(1) DEFAULT '0',
`CREATED_USER` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
| 0debug
|
Switch column values : Hey guys, I have two tables like this:
MasterEntries with columns:
**ParishName AverageMark**
Hanover 50.00
Manchester 65.00
Andrew 70.00
MasterScoreSheet with columns :
**Hanover Manchester [St.Andrew]**
50.00 65.00 70.00
I would like the AverageMark column values from MasterEntries to become the values for the respective columns, like this:
**Hanover Manchester Andrew**
50.00 65.00 70.00
How can I get this done?
Please assist. Thanks.
| 0debug
|
static void use_high_update_speed(WmallDecodeCtx *s, int ich)
{
int ilms, recent, icoef;
s->update_speed[ich] = 16;
for (ilms = s->cdlms_ttl[ich]; ilms >= 0; ilms--) {
recent = s->cdlms[ich][ilms].recent;
if (s->bV3RTM) {
for (icoef = 0; icoef < s->cdlms[ich][ilms].order; icoef++)
s->cdlms[ich][ilms].lms_updates[icoef + recent] *= 2;
} else {
for (icoef = 0; icoef < s->cdlms[ich][ilms].order; icoef++)
s->cdlms[ich][ilms].lms_updates[icoef] *= 2;
}
}
}
| 1threat
|
Why do different date formats make different results for the same date? : I try to use the current date in date formats, but when I use different date formats I get different results. At first I used this code:
private String getTodayDateString() {
Calendar cal = Calendar.getInstance();
int month=cal.get(Calendar.MONTH);
return Integer.toString(month);
}
and this returns me 5 as the result for the month.
But when I use this code:
private String getTodayDateString2() {
DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
Calendar cal = Calendar.getInstance();
return dateFormat.format(cal.getTime());
}
function returns me 14/6/2016 and this means month is calculated 6 in this dateformat.why?where is the problem?
| 0debug
|
Incorrect frame when dismissing modally presented view controller : <p>I am presenting a <code>UIViewController</code> using a custom transition and a custom <code>UIPresentationController</code>. The view controller's view does not cover the entire screen, so the presenting view controller is still visible.</p>
<p>Next, I present an instance of <code>UIImagePickerController</code> on top of this view controller. The problem is that when I dismiss the image picker, the presenting view controller's frame covers the entire screen instead of just the portion I want it to cover. The frame specified by <code>frameOfPresentedViewInContainerView</code> in my custom <code>UIPresentationController</code> seems to be completely ignored.</p>
<p>Only if present the image picker with a <code>modalPresentationStyle</code> of <code>UIModalPresentationOverCurrentContext</code> my frames remain intact (which makes sense since no views are removed from the view hierarchy in the first place). Unfortunately that's not what I want. I want the image picker to be presented full screen, which - for whatever reason - seems to mess up my layout. Anything that I might be doing wrong or forgetting here? Any suggestions?</p>
| 0debug
|
Can anyone explain the meaning of this piece of code from C? : <p>The code is C; it is written below:</p>
<pre><code>int main(){
char* time = (char *)malloc(10240 * sizeof(char));
scanf("%s",time);
return 0;
}
</code></pre>
| 0debug
|
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(client, sector_num, nb_sectors, qiov, offset);
}
| 1threat
|
static inline uint64_t ram_chunk_index(const uint8_t *start,
const uint8_t *host)
{
return ((uintptr_t) host - (uintptr_t) start) >> RDMA_REG_CHUNK_SHIFT;
}
| 1threat
|
Filter array of objects based on date : I want to display the objects based on the current month.
My code is:
for (let m of this.monthEvent){
m=new Date();
if(m.getMonth() === new Date().getMonth()){
this.thisMonth=m;
}
return this.thisMonth;
my array is
[ 0:{title: "", date: "2018-03-29"}
1:{title: "", date: "2018-04-13"}
2:{title: "", date: "2018-04-12"}
3:{title: "leave", date: "2018-04-11"}
4:{title: "", date: "2018-04-16"}]
It shows "m.getTime is not a function".
I want to display the current month's events.
Thanks in advance
| 0debug
|
How to use Scrollspy & Affix in Angular 2 : <p>I noticed that you cannot use in angular 2 components bootstrap feature like <code>data-spy="affix"</code> </p>
<p>Does anyone know how to use affix and scrollspy in angular 2? (<a href="http://www.w3schools.com/bootstrap/tryit.asp?filename=trybs_scrollspy_affix&stacked=h" rel="noreferrer">Example</a>)</p>
| 0debug
|
void wdt_i6300esb_init(void)
{
watchdog_add_model(&model);
}
| 1threat
|
Javascript OnClick Button function not working : I'm trying to edit some code in order to execute a function once someone presses a button. For example, https://jsfiddle.net/h0eks0vq/12/
## the javascript ##
/* Get the button, and when the user clicks on it, execute myFunction */
document.getElementById("buttons-services").onclick = function() {myFunction()};
/* myFunction toggles between adding and removing the show class, which is used to hide and show the dropdown content */
function myFunction() {
document.getElementById("myDropdown").classList.toggle("show");
}
## My HTML (here's one button out of 4) ##
<div id="buttons-services">
<button onclick="myFunction()" id="myButton10" title="Cliquez pour une description du service offert" >Assemblage</button>
<div id="myDropdown" class="dropdown-content">
<p id="demo">
The problem I'm having is that the text shown when the function executes is always the same, despite there being different text for the different buttons. Hopefully someone with more experience can point me in the right direction.
| 0debug
|
Eclipse says "You have an error in your sql syntax", but the same query works in workbench. Where is the problem? : <p>I have this preprared statement that doesn't works in ecplise, but for workbench is ok, what can i do?</p>
<pre><code>public void aggiornaArma(Arma Armi) throws SQLException {
PreparedStatement myStmt = null;
try {
myStmt=myConn.prepareStatement("update arma"
+"set danni=?,Descrizione=?,costo=?,impugnatura=?,tipo=?"
+" where nome=?");
myStmt.setInt(1, Armi.getDanni());
myStmt.setString(2,Armi.getDescrizione());
myStmt.setInt(3,Armi.getCosto());
myStmt.setString(4, Armi.getImpugnatura());
myStmt.setString(5,Armi.getTipo());
myStmt.setString(6,Armi.getNome());
myStmt.executeUpdate();
}
</code></pre>
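<p>One thing worth checking (an editor's observation, not part of the original post): the concatenated SQL has no space between <code>arma</code> and <code>set</code>, so the driver receives <code>update armaset danni=...</code>, which is invalid — while pasting the query into Workbench by hand naturally adds the space back. A minimal sketch of the concatenation pitfall:</p>

```java
public class ConcatDemo {
    public static void main(String[] args) {
        // As written in the question: "update arma" + "set danni=..."
        String broken = "update arma" + "set danni=?,Descrizione=?";
        // With a leading space on the second fragment:
        String fixed  = "update arma" + " set danni=?,Descrizione=?";
        System.out.println(broken);  // update armaset danni=?,Descrizione=?
        System.out.println(fixed);   // update arma set danni=?,Descrizione=?
    }
}
```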
| 0debug
|