problem (string, lengths 26–131k) | labels (class label, 2 classes) |
|---|---|
AWS: can't connect to RDS database from my machine : <p>The EC2 instance/live web can connect just fine to the RDS database. But when I want to debug the code on my local machine, I can't connect to the database and get this error:</p>
<blockquote>
<p>OperationalError: (2003, "Can't connect to MySQL server on 'aa9jliuygesv4w.c03i1
ck3o0us.us-east-1.rds.amazonaws.com' (10060)")</p>
</blockquote>
<p>I've added the <code>.pem</code> and <code>.ppk</code> keys to <code>.ssh</code> and I've already configured the EB CLI. I don't know what to do anymore.</p>
<p>FYI: the app is built with Django.</p>
| 0debug |
Type '() => void' is not assignable to type '() => {}' : <p>I understand the error message:</p>
<blockquote>
<p>Type '() => void' is not assignable to type '() => {}'</p>
</blockquote>
<p>Well, sort of: it is telling me there is a type mismatch. However, I can't work out why the compiler thinks the types are not the same.</p>
<p>The background to the code is that I have a TypeScript class that is given a function and then stores it as a member. I want to be able to initialise the member with an empty 'noop' function so that I don't have to null-check it before use.</p>
<p>I have managed to reduce problem down to the following example test code:</p>
<pre><code>export class Test {
private _noop: () => {};
constructor(
) {
this._noop = () => { }; //I guess the compiler thinks this is returning a new empty object via object-literal syntax
this._noop = this.noop; //I would have thought this should definitely work
this._noop = () => undefined; //This does work
}
public noop(): void {
//Nothing to see here...
}
}
</code></pre>
<p>The three statements in the constructor are all intended to do the same job: initialise the member with a no-operation function. However, only the last statement works:</p>
<pre><code>this._noop = () => undefined;
</code></pre>
<p>The other two statements produce the compile error.</p>
<p>Does anyone know why the compiler can't seem to match the types?</p>
| 0debug |
How do I structure authenticated queries with GraphQL? : <p>I was thinking of writing an API that does the following things:</p>
<ul>
<li>Sign-up and sign-in users which provide the user with an authentication token</li>
<li>Create maps (data example: <code>{ name: "Quotes", attributes: ["quote", "author"] }</code>)</li>
<li>Create map items (data example: <code>{ quote: "...", author: "..." }</code>)</li>
</ul>
<p>I would build the queries somewhat like this:</p>
<pre><code>// return the name and id of all the user's maps
maps(authToken="…") {
name,
id
}
// return all the items of a single map
maps(authToken="…") {
map(name="Quotes") {
items
}
}
// OR by using the map_id
maps(authToken="…") {
map(id="…") {
items
}
}
</code></pre>
<p><strong>So, my question is, is this correct or would I need to structure it differently?</strong></p>
| 0debug |
Filtering specific column in Angular Material table in angular 5 : <p>On the official Angular Material website it is mentioned that <code>filterPredicate: ((data: T, filter: string) => boolean)</code> will filter data based on a specific field, but I don't understand how to get started.</p>
<p>I have seen an example but still don't get it: <a href="https://stackblitz.com/edit/angular-material2-table?file=app%2Fapp.component.html" rel="noreferrer">https://stackblitz.com/edit/angular-material2-table?file=app%2Fapp.component.html</a></p>
<p>By default it filters based on the whole object, but I want to search based on only a single property of the JSON.</p>
| 0debug |
static void spapr_rng_class_init(ObjectClass *oc, void *data)
{
DeviceClass *dc = DEVICE_CLASS(oc);
dc->realize = spapr_rng_realize;
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
dc->props = spapr_rng_properties;
} | 1threat |
Regular Expression for " - " and characters after : <p>This is a question for experts on regular expressions, since it is something I don't have much insight into.</p>
<p>It's not C#- or Java-specific; it's a general regular expression that I need to put in an application that renames files.</p>
<p>Basically I have structures of folders like this.</p>
<pre><code>1 - I went to the cinema 1
I went to the cinema 1 - movie title 1
I went to the cinema 1 - movie title 2
I went to the cinema 1 - movie title 3
I went to the cinema 1 - movie title 4
2 - I went to the cinema 2
3 - I went to the cinema 3
</code></pre>
<p>I need an expression that pretty much returns the text after " - ", because everything before it is the parent folder name.</p>
<p>It may be a simple question, but I did some searching and couldn't find it.</p>
<p>Thanks</p>
| 0debug |
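One way to do what the question above asks, sketched in Python for illustration (the question itself is language-agnostic): split on the first " - " only and keep the right-hand side.

```python
import re

names = [
    "I went to the cinema 1 - movie title 1",
    "1 - I went to the cinema 1",
]

# Split on the first " - " only; everything before it is the
# parent folder name, everything after it is what we want.
suffixes = [re.split(r"\s-\s", name, maxsplit=1)[1] for name in names]
print(suffixes)  # ['movie title 1', 'I went to the cinema 1']
```

In flavors that support lookbehind, a single pattern such as `(?<= - ).*` should match everything after the first " - " directly.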
Why does npx install webpack every time? : <p>I have a JavaScript app I'm bundling with webpack. Per the docs, I'm using this command to start bundling:</p>
<pre><code>npx webpack
</code></pre>
<p>Each time I get this output:</p>
<pre><code>npx: installed 1 in 2.775s
</code></pre>
<p>I've verified that the webpack command exists in my <code>./node_modules/.bin</code> directory where npx is looking. Can anyone think of why it's downloading webpack every time? It can take up to 7 seconds to complete this step, which is slowing down my builds.</p>
| 0debug |
Does setting numpy arrays to None free memory? : <p>I have hundreds of really large matrices, with shapes like (600, 800) or (3, 600, 800).</p>
<p>Therefore I want to de-allocate the memory used as soon as I don't really need something anymore.</p>
<p>I thought:</p>
<pre><code>some_matrix = None
</code></pre>
<p>Should do the job. Or is only the reference set to None while the space is still allocated somewhere in memory? (Like preserving the allocated space for some re-initialisation of <code>some_matrix</code> in the future.)</p>
<p>Additionally: sometimes I slice through the matrices, calculate something and put the values into a buffer (a list, because it gets appended to all the time). So setting a list to None will definitely free the memory, right?</p>
<p>Or does some kind of <code>unset()</code> method exist where whole identifiers plus their referenced objects are "deleted"?</p>
| 0debug |
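For what it's worth, a short illustration of what the question above is asking about (using a plain list as a stand-in for a large array, so it runs without NumPy): rebinding a name to None only drops that one reference, and CPython frees the object when its last reference disappears.

```python
big = [0] * 1_000_000      # stand-in for a large numpy array
alias = big                # a second name referencing the same object

big = None                 # drops only this reference...
assert alias is not None   # ...the data is still alive through `alias`

alias = None               # last reference gone: CPython frees the buffer now
# `del big` is equivalent to `big = None` for freeing purposes: both just
# remove one reference; neither forces deallocation on its own.
```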
How to create tuple with a loop in python : <p>I want to create this tuple:</p>
<pre><code>a=(1,1,1),(2,2,2),(3,3,3),(4,4,4),(5,5,5),(6,6,6),(7,7,7),(8,8,8),(9,9,9)
</code></pre>
<p>I tried this:</p>
<pre><code>a=1,1,1
for i in range (2,10):
a=a,(i,i,i)
</code></pre>
<p>However it creates a tuple inside other tuple in each iteration.</p>
<p>Thank you</p>
| 0debug |
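A hedged sketch of a fix for the question above: build the triples with a generator expression (or concatenate one-element tuples) so the result stays flat instead of nesting.

```python
# Desired: a flat tuple of triples (1,1,1) ... (9,9,9).
a = tuple((i, i, i) for i in range(1, 10))

# Equivalent explicit loop: concatenate a one-element tuple each
# time instead of wrapping the old value in a new pair.
b = ((1, 1, 1),)
for i in range(2, 10):
    b += ((i, i, i),)

assert a == b
print(a[0], a[-1])  # (1, 1, 1) (9, 9, 9)
```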
Java ternary operator with negative variable : Can anyone explain how negative variables work in Java?
public class Ternary {
public static void main(String[] args) {
int i,k;
i=-10;
k=i<0?-i:i;
System.out.print(i + " is " + k);
}
Output: -10 is 10
Can anyone explain the internal working of the variables in this scenario? How does the value become positive? Thanks in advance! | 0debug |
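The ternary in the question above just applies the unary minus operator when i is negative; the variable itself is never modified. The same logic sketched in Python for illustration (the semantics match Java's here):

```python
i = -10
k = -i if i < 0 else i   # unary minus yields the negated value; i is unchanged
print(i, "is", k)        # -10 is 10
```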
int get_segment32(CPUPPCState *env, mmu_ctx_t *ctx,
target_ulong eaddr, int rw, int type)
{
hwaddr hash;
target_ulong vsid;
int ds, pr, target_page_bits;
int ret, ret2;
target_ulong sr, pgidx;
pr = msr_pr;
ctx->eaddr = eaddr;
sr = env->sr[eaddr >> 28];
ctx->key = (((sr & 0x20000000) && (pr != 0)) ||
((sr & 0x40000000) && (pr == 0))) ? 1 : 0;
ds = sr & 0x80000000 ? 1 : 0;
ctx->nx = sr & 0x10000000 ? 1 : 0;
vsid = sr & 0x00FFFFFF;
target_page_bits = TARGET_PAGE_BITS;
LOG_MMU("Check segment v=" TARGET_FMT_lx " %d " TARGET_FMT_lx " nip="
TARGET_FMT_lx " lr=" TARGET_FMT_lx
" ir=%d dr=%d pr=%d %d t=%d\n",
eaddr, (int)(eaddr >> 28), sr, env->nip, env->lr, (int)msr_ir,
(int)msr_dr, pr != 0 ? 1 : 0, rw, type);
pgidx = (eaddr & ~SEGMENT_MASK_256M) >> target_page_bits;
hash = vsid ^ pgidx;
ctx->ptem = (vsid << 7) | (pgidx >> 10);
LOG_MMU("pte segment: key=%d ds %d nx %d vsid " TARGET_FMT_lx "\n",
ctx->key, ds, ctx->nx, vsid);
ret = -1;
if (!ds) {
if (type != ACCESS_CODE || ctx->nx == 0) {
LOG_MMU("htab_base " TARGET_FMT_plx " htab_mask " TARGET_FMT_plx
" hash " TARGET_FMT_plx "\n",
env->htab_base, env->htab_mask, hash);
ctx->hash[0] = hash;
ctx->hash[1] = ~hash;
ctx->raddr = (hwaddr)-1ULL;
LOG_MMU("0 htab=" TARGET_FMT_plx "/" TARGET_FMT_plx
" vsid=" TARGET_FMT_lx " ptem=" TARGET_FMT_lx
" hash=" TARGET_FMT_plx "\n",
env->htab_base, env->htab_mask, vsid, ctx->ptem,
ctx->hash[0]);
ret = find_pte32(env, ctx, 0, rw, type, target_page_bits);
if (ret < 0) {
LOG_MMU("1 htab=" TARGET_FMT_plx "/" TARGET_FMT_plx
" vsid=" TARGET_FMT_lx " api=" TARGET_FMT_lx
" hash=" TARGET_FMT_plx "\n", env->htab_base,
env->htab_mask, vsid, ctx->ptem, ctx->hash[1]);
ret2 = find_pte32(env, ctx, 1, rw, type,
target_page_bits);
if (ret2 != -1) {
ret = ret2;
}
}
#if defined(DUMP_PAGE_TABLES)
if (qemu_log_enabled()) {
hwaddr curaddr;
uint32_t a0, a1, a2, a3;
qemu_log("Page table: " TARGET_FMT_plx " len " TARGET_FMT_plx
"\n", sdr, mask + 0x80);
for (curaddr = sdr; curaddr < (sdr + mask + 0x80);
curaddr += 16) {
a0 = ldl_phys(curaddr);
a1 = ldl_phys(curaddr + 4);
a2 = ldl_phys(curaddr + 8);
a3 = ldl_phys(curaddr + 12);
if (a0 != 0 || a1 != 0 || a2 != 0 || a3 != 0) {
qemu_log(TARGET_FMT_plx ": %08x %08x %08x %08x\n",
curaddr, a0, a1, a2, a3);
}
}
}
#endif
} else {
LOG_MMU("No access allowed\n");
ret = -3;
}
} else {
target_ulong sr;
LOG_MMU("direct store...\n");
sr = env->sr[eaddr >> 28];
if ((sr & 0x1FF00000) >> 20 == 0x07f) {
ctx->raddr = ((sr & 0xF) << 28) | (eaddr & 0x0FFFFFFF);
ctx->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
return 0;
}
switch (type) {
case ACCESS_INT:
break;
case ACCESS_CODE:
return -4;
case ACCESS_FLOAT:
return -4;
case ACCESS_RES:
return -4;
case ACCESS_CACHE:
ctx->raddr = eaddr;
return 0;
case ACCESS_EXT:
return -4;
default:
qemu_log("ERROR: instruction should not need "
"address translation\n");
return -4;
}
if ((rw == 1 || ctx->key != 1) && (rw == 0 || ctx->key != 0)) {
ctx->raddr = eaddr;
ret = 2;
} else {
ret = -2;
}
}
return ret;
}
| 1threat |
One-line syntax to read and return a file in Python using a with block : I need to read a file and return the result; this is the syntax I use:
return json.loads(with open(file, 'r') as f: f.read())
I know that a with open block cannot be written in one line like that, so I'm looking for the correct syntax to fix it. | 0debug |
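A with statement can't sit inside an expression, so the usual fix for the row above is a tiny helper; json.load also accepts the file object directly, replacing json.loads(f.read()). (load_json is an illustrative name.)

```python
import json

def load_json(path):
    # `with` is a statement, so it needs its own line; the file is
    # closed as soon as the block exits, even on return.
    with open(path, "r") as f:
        return json.load(f)
```

Then `return load_json(file)` replaces the attempted one-liner.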
void helper_msa_st_df(CPUMIPSState *env, uint32_t df, uint32_t wd, uint32_t rs,
int32_t s10)
{
wr_t *pwd = &(env->active_fpu.fpr[wd].wr);
target_ulong addr = env->active_tc.gpr[rs] + (s10 << df);
int i;
switch (df) {
case DF_BYTE:
for (i = 0; i < DF_ELEMENTS(DF_BYTE); i++) {
do_sb(env, addr + (i << DF_BYTE), pwd->b[i],
env->hflags & MIPS_HFLAG_KSU);
}
break;
case DF_HALF:
for (i = 0; i < DF_ELEMENTS(DF_HALF); i++) {
do_sh(env, addr + (i << DF_HALF), pwd->h[i],
env->hflags & MIPS_HFLAG_KSU);
}
break;
case DF_WORD:
for (i = 0; i < DF_ELEMENTS(DF_WORD); i++) {
do_sw(env, addr + (i << DF_WORD), pwd->w[i],
env->hflags & MIPS_HFLAG_KSU);
}
break;
case DF_DOUBLE:
for (i = 0; i < DF_ELEMENTS(DF_DOUBLE); i++) {
do_sd(env, addr + (i << DF_DOUBLE), pwd->d[i],
env->hflags & MIPS_HFLAG_KSU);
}
break;
}
}
| 1threat |
Retain backslash in Ruby gsub regex : I have a regex that replaces spaces and forward slashes with underscores. But it also replaces the backslash in the string:
"hello\h /123".gsub(/[\s+\/]/, "_")
=> "helloh__123"
How do I retain the backslash and replace only spaces and forward slashes? | 0debug |
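For what it's worth, the character class in the row above never matches a backslash; in a double-quoted Ruby literal the `\h` escape is consumed by the string itself (a single-quoted literal should keep it). The same substitution sketched in Python, where a raw string makes the backslash explicit:

```python
import re

s = r"hello\h /123"             # raw string: the backslash really is in the data
out = re.sub(r"[\s/]", "_", s)  # replace only whitespace and forward slashes
print(out)                      # hello\h__123
```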
static int mpc8544_load_device_tree(CPUPPCState *env,
target_phys_addr_t addr,
target_phys_addr_t ramsize,
target_phys_addr_t initrd_base,
target_phys_addr_t initrd_size,
const char *kernel_cmdline)
{
int ret = -1;
uint32_t mem_reg_property[] = {0, cpu_to_be32(ramsize)};
int fdt_size;
void *fdt;
uint8_t hypercall[16];
uint32_t clock_freq = 400000000;
uint32_t tb_freq = 400000000;
int i;
char compatible[] = "MPC8544DS\0MPC85xxDS";
char model[] = "MPC8544DS";
char soc[128];
char ser0[128];
char ser1[128];
char mpic[128];
uint32_t mpic_ph;
char gutil[128];
char pci[128];
uint32_t pci_map[7 * 8];
uint32_t pci_ranges[12] = { 0x2000000, 0x0, 0xc0000000, 0xc0000000, 0x0,
0x20000000, 0x1000000, 0x0, 0x0, 0xe1000000,
0x0, 0x10000 };
QemuOpts *machine_opts;
const char *dumpdtb = NULL;
fdt = create_device_tree(&fdt_size);
if (fdt == NULL) {
goto out;
}
qemu_devtree_setprop_string(fdt, "/", "model", model);
qemu_devtree_setprop(fdt, "/", "compatible", compatible,
sizeof(compatible));
qemu_devtree_setprop_cell(fdt, "/", "#address-cells", 1);
qemu_devtree_setprop_cell(fdt, "/", "#size-cells", 1);
qemu_devtree_add_subnode(fdt, "/memory");
qemu_devtree_setprop_string(fdt, "/memory", "device_type", "memory");
qemu_devtree_setprop(fdt, "/memory", "reg", mem_reg_property,
sizeof(mem_reg_property));
qemu_devtree_add_subnode(fdt, "/chosen");
if (initrd_size) {
ret = qemu_devtree_setprop_cell(fdt, "/chosen", "linux,initrd-start",
initrd_base);
if (ret < 0) {
fprintf(stderr, "couldn't set /chosen/linux,initrd-start\n");
}
ret = qemu_devtree_setprop_cell(fdt, "/chosen", "linux,initrd-end",
(initrd_base + initrd_size));
if (ret < 0) {
fprintf(stderr, "couldn't set /chosen/linux,initrd-end\n");
}
}
ret = qemu_devtree_setprop_string(fdt, "/chosen", "bootargs",
kernel_cmdline);
if (ret < 0)
fprintf(stderr, "couldn't set /chosen/bootargs\n");
if (kvm_enabled()) {
clock_freq = kvmppc_get_clockfreq();
tb_freq = kvmppc_get_tbfreq();
qemu_devtree_add_subnode(fdt, "/hypervisor");
qemu_devtree_setprop_string(fdt, "/hypervisor", "compatible",
"linux,kvm");
kvmppc_get_hypercall(env, hypercall, sizeof(hypercall));
qemu_devtree_setprop(fdt, "/hypervisor", "hcall-instructions",
hypercall, sizeof(hypercall));
}
qemu_devtree_add_subnode(fdt, "/cpus");
qemu_devtree_setprop_cell(fdt, "/cpus", "#address-cells", 1);
qemu_devtree_setprop_cell(fdt, "/cpus", "#size-cells", 0);
for (i = smp_cpus - 1; i >= 0; i--) {
char cpu_name[128];
uint64_t cpu_release_addr = MPC8544_SPIN_BASE + (i * 0x20);
for (env = first_cpu; env != NULL; env = env->next_cpu) {
if (env->cpu_index == i) {
break;
}
}
if (!env) {
continue;
}
snprintf(cpu_name, sizeof(cpu_name), "/cpus/PowerPC,8544@%x", env->cpu_index);
qemu_devtree_add_subnode(fdt, cpu_name);
qemu_devtree_setprop_cell(fdt, cpu_name, "clock-frequency", clock_freq);
qemu_devtree_setprop_cell(fdt, cpu_name, "timebase-frequency", tb_freq);
qemu_devtree_setprop_string(fdt, cpu_name, "device_type", "cpu");
qemu_devtree_setprop_cell(fdt, cpu_name, "reg", env->cpu_index);
qemu_devtree_setprop_cell(fdt, cpu_name, "d-cache-line-size",
env->dcache_line_size);
qemu_devtree_setprop_cell(fdt, cpu_name, "i-cache-line-size",
env->icache_line_size);
qemu_devtree_setprop_cell(fdt, cpu_name, "d-cache-size", 0x8000);
qemu_devtree_setprop_cell(fdt, cpu_name, "i-cache-size", 0x8000);
qemu_devtree_setprop_cell(fdt, cpu_name, "bus-frequency", 0);
if (env->cpu_index) {
qemu_devtree_setprop_string(fdt, cpu_name, "status", "disabled");
qemu_devtree_setprop_string(fdt, cpu_name, "enable-method", "spin-table");
qemu_devtree_setprop_u64(fdt, cpu_name, "cpu-release-addr",
cpu_release_addr);
} else {
qemu_devtree_setprop_string(fdt, cpu_name, "status", "okay");
}
}
qemu_devtree_add_subnode(fdt, "/aliases");
snprintf(soc, sizeof(soc), "/soc8544@%x", MPC8544_CCSRBAR_BASE);
qemu_devtree_add_subnode(fdt, soc);
qemu_devtree_setprop_string(fdt, soc, "device_type", "soc");
qemu_devtree_setprop_string(fdt, soc, "compatible", "simple-bus");
qemu_devtree_setprop_cell(fdt, soc, "#address-cells", 1);
qemu_devtree_setprop_cell(fdt, soc, "#size-cells", 1);
qemu_devtree_setprop_cells(fdt, soc, "ranges", 0x0, MPC8544_CCSRBAR_BASE,
MPC8544_CCSRBAR_SIZE);
qemu_devtree_setprop_cells(fdt, soc, "reg", MPC8544_CCSRBAR_BASE,
MPC8544_CCSRBAR_REGSIZE);
qemu_devtree_setprop_cell(fdt, soc, "bus-frequency", 0);
snprintf(mpic, sizeof(mpic), "%s/pic@%x", soc,
MPC8544_MPIC_REGS_BASE - MPC8544_CCSRBAR_BASE);
qemu_devtree_add_subnode(fdt, mpic);
qemu_devtree_setprop_string(fdt, mpic, "device_type", "open-pic");
qemu_devtree_setprop_string(fdt, mpic, "compatible", "chrp,open-pic");
qemu_devtree_setprop_cells(fdt, mpic, "reg", MPC8544_MPIC_REGS_BASE -
MPC8544_CCSRBAR_BASE, 0x40000);
qemu_devtree_setprop_cell(fdt, mpic, "#address-cells", 0);
qemu_devtree_setprop_cell(fdt, mpic, "#interrupt-cells", 2);
mpic_ph = qemu_devtree_alloc_phandle(fdt);
qemu_devtree_setprop_cell(fdt, mpic, "phandle", mpic_ph);
qemu_devtree_setprop_cell(fdt, mpic, "linux,phandle", mpic_ph);
qemu_devtree_setprop(fdt, mpic, "interrupt-controller", NULL, 0);
snprintf(ser1, sizeof(ser1), "%s/serial@%x", soc,
MPC8544_SERIAL1_REGS_BASE - MPC8544_CCSRBAR_BASE);
qemu_devtree_add_subnode(fdt, ser1);
qemu_devtree_setprop_string(fdt, ser1, "device_type", "serial");
qemu_devtree_setprop_string(fdt, ser1, "compatible", "ns16550");
qemu_devtree_setprop_cells(fdt, ser1, "reg", MPC8544_SERIAL1_REGS_BASE -
MPC8544_CCSRBAR_BASE, 0x100);
qemu_devtree_setprop_cell(fdt, ser1, "cell-index", 1);
qemu_devtree_setprop_cell(fdt, ser1, "clock-frequency", 0);
qemu_devtree_setprop_cells(fdt, ser1, "interrupts", 42, 2);
qemu_devtree_setprop_phandle(fdt, ser1, "interrupt-parent", mpic);
qemu_devtree_setprop_string(fdt, "/aliases", "serial1", ser1);
snprintf(ser0, sizeof(ser0), "%s/serial@%x", soc,
MPC8544_SERIAL0_REGS_BASE - MPC8544_CCSRBAR_BASE);
qemu_devtree_add_subnode(fdt, ser0);
qemu_devtree_setprop_string(fdt, ser0, "device_type", "serial");
qemu_devtree_setprop_string(fdt, ser0, "compatible", "ns16550");
qemu_devtree_setprop_cells(fdt, ser0, "reg", MPC8544_SERIAL0_REGS_BASE -
MPC8544_CCSRBAR_BASE, 0x100);
qemu_devtree_setprop_cell(fdt, ser0, "cell-index", 0);
qemu_devtree_setprop_cell(fdt, ser0, "clock-frequency", 0);
qemu_devtree_setprop_cells(fdt, ser0, "interrupts", 42, 2);
qemu_devtree_setprop_phandle(fdt, ser0, "interrupt-parent", mpic);
qemu_devtree_setprop_string(fdt, "/aliases", "serial0", ser0);
qemu_devtree_setprop_string(fdt, "/chosen", "linux,stdout-path", ser0);
snprintf(gutil, sizeof(gutil), "%s/global-utilities@%x", soc,
MPC8544_UTIL_BASE - MPC8544_CCSRBAR_BASE);
qemu_devtree_add_subnode(fdt, gutil);
qemu_devtree_setprop_string(fdt, gutil, "compatible", "fsl,mpc8544-guts");
qemu_devtree_setprop_cells(fdt, gutil, "reg", MPC8544_UTIL_BASE -
MPC8544_CCSRBAR_BASE, 0x1000);
qemu_devtree_setprop(fdt, gutil, "fsl,has-rstcr", NULL, 0);
snprintf(pci, sizeof(pci), "/pci@%x", MPC8544_PCI_REGS_BASE);
qemu_devtree_add_subnode(fdt, pci);
qemu_devtree_setprop_cell(fdt, pci, "cell-index", 0);
qemu_devtree_setprop_string(fdt, pci, "compatible", "fsl,mpc8540-pci");
qemu_devtree_setprop_string(fdt, pci, "device_type", "pci");
qemu_devtree_setprop_cells(fdt, pci, "interrupt-map-mask", 0xf800, 0x0,
0x0, 0x7);
pci_map_create(fdt, pci_map, qemu_devtree_get_phandle(fdt, mpic));
qemu_devtree_setprop(fdt, pci, "interrupt-map", pci_map, sizeof(pci_map));
qemu_devtree_setprop_phandle(fdt, pci, "interrupt-parent", mpic);
qemu_devtree_setprop_cells(fdt, pci, "interrupts", 24, 2);
qemu_devtree_setprop_cells(fdt, pci, "bus-range", 0, 255);
for (i = 0; i < 12; i++) {
pci_ranges[i] = cpu_to_be32(pci_ranges[i]);
}
qemu_devtree_setprop(fdt, pci, "ranges", pci_ranges, sizeof(pci_ranges));
qemu_devtree_setprop_cells(fdt, pci, "reg", MPC8544_PCI_REGS_BASE,
0x1000);
qemu_devtree_setprop_cell(fdt, pci, "clock-frequency", 66666666);
qemu_devtree_setprop_cell(fdt, pci, "#interrupt-cells", 1);
qemu_devtree_setprop_cell(fdt, pci, "#size-cells", 2);
qemu_devtree_setprop_cell(fdt, pci, "#address-cells", 3);
qemu_devtree_setprop_string(fdt, "/aliases", "pci0", pci);
machine_opts = qemu_opts_find(qemu_find_opts("machine"), 0);
if (machine_opts) {
dumpdtb = qemu_opt_get(machine_opts, "dumpdtb");
}
if (dumpdtb) {
FILE *f = fopen(dumpdtb, "wb");
size_t len;
len = fwrite(fdt, fdt_size, 1, f);
fclose(f);
if (len != fdt_size) {
exit(1);
}
exit(0);
}
ret = rom_add_blob_fixed(BINARY_DEVICE_TREE_FILE, fdt, fdt_size, addr);
if (ret < 0) {
goto out;
}
g_free(fdt);
ret = fdt_size;
out:
return ret;
}
| 1threat |
Using a button to add images and text to webpage : <p>So I'm making a website for a friend, and they would like to be able to add images to the site without me having to manually go in and add them through my editor. Can anyone tell me how I would go about putting something like that into my code? I've looked everywhere online and can't find anything!</p>
| 0debug |
int av_opt_set(void *obj, const char *name, const char *val, int search_flags)
{
int ret;
void *dst, *target_obj;
const AVOption *o = av_opt_find2(obj, name, NULL, 0, search_flags, &target_obj);
if (!o || !target_obj)
return AVERROR_OPTION_NOT_FOUND;
if (!val && (o->type != AV_OPT_TYPE_STRING &&
o->type != AV_OPT_TYPE_PIXEL_FMT && o->type != AV_OPT_TYPE_SAMPLE_FMT &&
o->type != AV_OPT_TYPE_IMAGE_SIZE && o->type != AV_OPT_TYPE_VIDEO_RATE &&
o->type != AV_OPT_TYPE_DURATION && o->type != AV_OPT_TYPE_COLOR &&
o->type != AV_OPT_TYPE_CHANNEL_LAYOUT))
return AVERROR(EINVAL);
dst = ((uint8_t*)target_obj) + o->offset;
switch (o->type) {
case AV_OPT_TYPE_STRING: return set_string(obj, o, val, dst);
case AV_OPT_TYPE_BINARY: return set_string_binary(obj, o, val, dst);
case AV_OPT_TYPE_FLAGS:
case AV_OPT_TYPE_INT:
case AV_OPT_TYPE_INT64:
case AV_OPT_TYPE_FLOAT:
case AV_OPT_TYPE_DOUBLE:
case AV_OPT_TYPE_RATIONAL: return set_string_number(obj, target_obj, o, val, dst);
case AV_OPT_TYPE_IMAGE_SIZE:
if (!val || !strcmp(val, "none")) {
*(int *)dst = *((int *)dst + 1) = 0;
return 0;
}
ret = av_parse_video_size(dst, ((int *)dst) + 1, val);
if (ret < 0)
av_log(obj, AV_LOG_ERROR, "Unable to parse option value \"%s\" as image size\n", val);
return ret;
case AV_OPT_TYPE_VIDEO_RATE:
if (!val) {
ret = AVERROR(EINVAL);
} else {
ret = av_parse_video_rate(dst, val);
}
if (ret < 0)
av_log(obj, AV_LOG_ERROR, "Unable to parse option value \"%s\" as video rate\n", val);
return ret;
case AV_OPT_TYPE_PIXEL_FMT:
if (!val || !strcmp(val, "none")) {
ret = AV_PIX_FMT_NONE;
} else {
ret = av_get_pix_fmt(val);
if (ret == AV_PIX_FMT_NONE) {
char *tail;
ret = strtol(val, &tail, 0);
if (*tail || (unsigned)ret >= AV_PIX_FMT_NB) {
av_log(obj, AV_LOG_ERROR, "Unable to parse option value \"%s\" as pixel format\n", val);
return AVERROR(EINVAL);
}
}
}
*(enum AVPixelFormat *)dst = ret;
return 0;
case AV_OPT_TYPE_SAMPLE_FMT:
if (!val || !strcmp(val, "none")) {
ret = AV_SAMPLE_FMT_NONE;
} else {
ret = av_get_sample_fmt(val);
if (ret == AV_SAMPLE_FMT_NONE) {
char *tail;
ret = strtol(val, &tail, 0);
if (*tail || (unsigned)ret >= AV_SAMPLE_FMT_NB) {
av_log(obj, AV_LOG_ERROR, "Unable to parse option value \"%s\" as sample format\n", val);
return AVERROR(EINVAL);
}
}
}
*(enum AVSampleFormat *)dst = ret;
return 0;
case AV_OPT_TYPE_DURATION:
if (!val) {
*(int64_t *)dst = 0;
return 0;
} else {
if ((ret = av_parse_time(dst, val, 1)) < 0)
av_log(obj, AV_LOG_ERROR, "Unable to parse option value \"%s\" as duration\n", val);
return ret;
}
break;
case AV_OPT_TYPE_COLOR:
if (!val) {
return 0;
} else {
ret = av_parse_color(dst, val, -1, obj);
if (ret < 0)
av_log(obj, AV_LOG_ERROR, "Unable to parse option value \"%s\" as color\n", val);
return ret;
}
break;
case AV_OPT_TYPE_CHANNEL_LAYOUT:
if (!val || !strcmp(val, "none")) {
*(int64_t *)dst = 0;
} else {
#if FF_API_GET_CHANNEL_LAYOUT_COMPAT
int64_t cl = ff_get_channel_layout(val, 0);
#else
int64_t cl = av_get_channel_layout(val);
#endif
if (!cl) {
av_log(obj, AV_LOG_ERROR, "Unable to parse option value \"%s\" as channel layout\n", val);
ret = AVERROR(EINVAL);
}
*(int64_t *)dst = cl;
return ret;
}
break;
}
av_log(obj, AV_LOG_ERROR, "Invalid option type.\n");
return AVERROR(EINVAL);
}
| 1threat |
int fw_cfg_add_i32(FWCfgState *s, uint16_t key, uint32_t value)
{
uint32_t *copy;
copy = g_malloc(sizeof(value));
*copy = cpu_to_le32(value);
return fw_cfg_add_bytes(s, key, (uint8_t *)copy, sizeof(value));
}
| 1threat |
How to throw exception in jenkins pipeline? : <p>I have handled the Jenkins pipeline steps with try/catch blocks. I want to throw an exception manually in some cases, but it shows the error below:</p>
<pre><code>org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use new java.io.IOException java.lang.String
</code></pre>
<p>I checked the scriptApproval section and there are no pending approvals.</p>
| 0debug |
db.execute('SELECT * FROM employees WHERE id = ' + user_input) | 1threat |
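The row above is labelled a threat because it splices user input directly into the SQL string. For contrast, a hedged sketch of the parameterised alternative using Python's sqlite3 (the db/employees names follow the sample; the schema is invented for the demo):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
db.execute("INSERT INTO employees VALUES (1, 'Ada')")

user_input = "1 OR 1=1"  # classic injection payload
# The ? placeholder binds user_input as a single value, so the payload
# cannot alter the query's structure the way string concatenation can.
rows = db.execute("SELECT * FROM employees WHERE id = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload matches no id
```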
Failed resolution of: Lcom/google/android/gms/common/api/Api$zzf; : <p>I get this error when we run the APK of our application. In <code>build.gradle</code> we enabled multidex, and the multidex dependency exists in the Gradle file. We changed the Firebase versions up and down, but that did not work for us. This is the full log from the Run console:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>D/AndroidRuntime: Shutting down VM
E/AndroidRuntime: FATAL EXCEPTION: main
Process: ir.parsinteam.ojoobe, PID: 5141
java.lang.NoClassDefFoundError: Failed resolution of: Lcom/google/android/gms/common/api/Api$zzf;
at com.google.android.gms.location.LocationServices.<clinit>(Unknown Source)
at ir.adad.client.LocationMethods.callAndroidLocationService(LocationMethods.java:101)
at ir.adad.client.LocationMethods.<init>(LocationMethods.java:40)
at ir.adad.client.LocationMethods.getInstance(LocationMethods.java:45)
at ir.adad.client.AdadScript.urlParameters(AdadScript.java:390)
at ir.adad.client.AdadScript.downloadClient(AdadScript.java:148)
at ir.adad.client.AdadScript.initializeInternal(AdadScript.java:134)
at ir.adad.client.AdadScript.initializeClient(AdadScript.java:110)
at ir.adad.client.Adad.initialize(Adad.java:22)
at ir.parsinteam.ojoobe.activities.MainActivity.onCreate(MainActivity.java:62)
at android.app.Activity.performCreate(Activity.java:6662)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1118)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2599)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2707)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1460)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6077)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:866)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:756)
Caused by: java.lang.ClassNotFoundException: Didn't find class "com.google.android.gms.common.api.Api$zzf" on path: DexPathList[[zip file "/data/app/ir.parsinteam.ojoobe-2/base.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_dependencies_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_0_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_1_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_2_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_3_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_4_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_5_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_6_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_7_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_8_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_9_apk.apk"],nativeLibraryDirectories=[/data/app/ir.parsinteam.ojoobe-2/lib/x86, /data/app/ir.parsinteam.ojoobe-2/base.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_dependencies_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_0_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_1_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_2_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_3_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_4_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_5_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_6_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_7_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_8_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_9_apk.apk!/lib/x86, /system/lib, /vendor/lib]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:56)
at java.lang.ClassLoader.loadClass(ClassLoader.java:380)
at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
at com.google.android.gms.location.LocationServices.<clinit>(Unknown Source)
at ir.adad.client.LocationMethods.callAndroidLocationService(LocationMethods.java:101)
at ir.adad.client.LocationMethods.<init>(LocationMethods.java:40)
at ir.adad.client.LocationMethods.getInstance(LocationMethods.java:45)
at ir.adad.client.AdadScript.urlParameters(AdadScript.java:390)
at ir.adad.client.AdadScript.downloadClient(AdadScript.java:148)
at ir.adad.client.AdadScript.initializeInternal(AdadScript.java:134)
at ir.adad.client.AdadScript.initializeClient(AdadScript.java:110)
at ir.adad.client.Adad.initialize(Adad.java:22)
at ir.parsinteam.ojoobe.activities.MainActivity.onCreate(MainActivity.java:62)
at android.app.Activity.performCreate(Activity.java:6662)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1118)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2599)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2707)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1460)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6077)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:866)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:756)
Application terminated.</code></pre>
</div>
</div>
</p>
| 0debug |
static int decode_chunks(AVCodecContext *avctx,
AVFrame *picture, int *got_output,
const uint8_t *buf, int buf_size)
{
Mpeg1Context *s = avctx->priv_data;
MpegEncContext *s2 = &s->mpeg_enc_ctx;
const uint8_t *buf_ptr = buf;
const uint8_t *buf_end = buf + buf_size;
int ret, input_size;
int last_code = 0, skip_frame = 0;
int picture_start_code_seen = 0;
for (;;) {
uint32_t start_code = -1;
buf_ptr = avpriv_find_start_code(buf_ptr, buf_end, &start_code);
if (start_code > 0x1ff) {
if (!skip_frame) {
if (HAVE_THREADS && (avctx->active_thread_type & FF_THREAD_SLICE) &&
!avctx->hwaccel) {
int i;
av_assert0(avctx->thread_count > 1);
avctx->execute(avctx, slice_decode_thread, &s2->thread_context[0], NULL, s->slice_count, sizeof(void*));
for (i = 0; i < s->slice_count; i++)
s2->er.error_count += s2->thread_context[i]->er.error_count;
ret = slice_end(avctx, picture);
if (ret < 0)
return ret;
else if (ret) {
if (s2->last_picture_ptr || s2->low_delay)
*got_output = 1;
s2->pict_type = 0;
return FFMAX(0, buf_ptr - buf - s2->parse_context.last_index);
input_size = buf_end - buf_ptr;
if (avctx->debug & FF_DEBUG_STARTCODE) {
av_log(avctx, AV_LOG_DEBUG, "%3X at %td left %d\n", start_code, buf_ptr-buf, input_size);
switch (start_code) {
case SEQ_START_CODE:
if (last_code == 0) {
mpeg1_decode_sequence(avctx, buf_ptr, input_size);
if(buf != avctx->extradata)
s->sync=1;
} else {
av_log(avctx, AV_LOG_ERROR, "ignoring SEQ_START_CODE after %X\n", last_code);
if (avctx->err_recognition & AV_EF_EXPLODE)
break;
case PICTURE_START_CODE:
if (picture_start_code_seen && s2->picture_structure == PICT_FRAME) {
av_log(avctx, AV_LOG_WARNING, "ignoring extra picture following a frame-picture\n");
break;
picture_start_code_seen = 1;
if (s2->width <= 0 || s2->height <= 0) {
av_log(avctx, AV_LOG_ERROR, "Invalid frame dimensions %dx%d.\n",
s2->width, s2->height);
if(s->tmpgexs){
s2->intra_dc_precision= 3;
s2->intra_matrix[0]= 1;
if (HAVE_THREADS && (avctx->active_thread_type & FF_THREAD_SLICE) &&
!avctx->hwaccel && s->slice_count) {
int i;
avctx->execute(avctx, slice_decode_thread,
s2->thread_context, NULL,
s->slice_count, sizeof(void*));
for (i = 0; i < s->slice_count; i++)
s2->er.error_count += s2->thread_context[i]->er.error_count;
s->slice_count = 0;
if (last_code == 0 || last_code == SLICE_MIN_START_CODE) {
ret = mpeg_decode_postinit(avctx);
if (ret < 0) {
av_log(avctx, AV_LOG_ERROR, "mpeg_decode_postinit() failure\n");
return ret;
if (mpeg1_decode_picture(avctx, buf_ptr, input_size) < 0)
s2->pict_type = 0;
s2->first_slice = 1;
last_code = PICTURE_START_CODE;
} else {
av_log(avctx, AV_LOG_ERROR, "ignoring pic after %X\n", last_code);
if (avctx->err_recognition & AV_EF_EXPLODE)
break;
case EXT_START_CODE:
init_get_bits(&s2->gb, buf_ptr, input_size*8);
switch (get_bits(&s2->gb, 4)) {
case 0x1:
if (last_code == 0) {
mpeg_decode_sequence_extension(s);
} else {
av_log(avctx, AV_LOG_ERROR, "ignoring seq ext after %X\n", last_code);
if (avctx->err_recognition & AV_EF_EXPLODE)
break;
case 0x2:
mpeg_decode_sequence_display_extension(s);
break;
case 0x3:
mpeg_decode_quant_matrix_extension(s2);
break;
case 0x7:
mpeg_decode_picture_display_extension(s);
break;
case 0x8:
if (last_code == PICTURE_START_CODE) {
mpeg_decode_picture_coding_extension(s);
} else {
av_log(avctx, AV_LOG_ERROR, "ignoring pic cod ext after %X\n", last_code);
if (avctx->err_recognition & AV_EF_EXPLODE)
break;
break;
case USER_START_CODE:
mpeg_decode_user_data(avctx, buf_ptr, input_size);
break;
case GOP_START_CODE:
if (last_code == 0) {
s2->first_field=0;
mpeg_decode_gop(avctx, buf_ptr, input_size);
s->sync=1;
} else {
av_log(avctx, AV_LOG_ERROR, "ignoring GOP_START_CODE after %X\n", last_code);
if (avctx->err_recognition & AV_EF_EXPLODE)
break;
default:
if (start_code >= SLICE_MIN_START_CODE &&
start_code <= SLICE_MAX_START_CODE && last_code == PICTURE_START_CODE) {
if (s2->progressive_sequence && !s2->progressive_frame) {
s2->progressive_frame = 1;
av_log(s2->avctx, AV_LOG_ERROR, "interlaced frame in progressive sequence, ignoring\n");
if (s2->picture_structure == 0 || (s2->progressive_frame && s2->picture_structure != PICT_FRAME)) {
av_log(s2->avctx, AV_LOG_ERROR, "picture_structure %d invalid, ignoring\n", s2->picture_structure);
s2->picture_structure = PICT_FRAME;
if (s2->progressive_sequence && !s2->frame_pred_frame_dct) {
av_log(s2->avctx, AV_LOG_WARNING, "invalid frame_pred_frame_dct\n");
if (s2->picture_structure == PICT_FRAME) {
s2->first_field = 0;
s2->v_edge_pos = 16 * s2->mb_height;
} else {
s2->first_field ^= 1;
s2->v_edge_pos = 8 * s2->mb_height;
memset(s2->mbskip_table, 0, s2->mb_stride * s2->mb_height);
if (start_code >= SLICE_MIN_START_CODE &&
start_code <= SLICE_MAX_START_CODE && last_code != 0) {
const int field_pic = s2->picture_structure != PICT_FRAME;
int mb_y = start_code - SLICE_MIN_START_CODE;
last_code = SLICE_MIN_START_CODE;
if(s2->codec_id != AV_CODEC_ID_MPEG1VIDEO && s2->mb_height > 2800/16)
mb_y += (*buf_ptr&0xE0)<<2;
mb_y <<= field_pic;
if (s2->picture_structure == PICT_BOTTOM_FIELD)
mb_y++;
if (mb_y >= s2->mb_height) {
av_log(s2->avctx, AV_LOG_ERROR, "slice below image (%d >= %d)\n", mb_y, s2->mb_height);
return -1;
if (s2->last_picture_ptr == NULL) {
if (s2->pict_type == AV_PICTURE_TYPE_B) {
if (!s2->closed_gop) {
skip_frame = 1;
break;
if (s2->pict_type == AV_PICTURE_TYPE_I || (s2->flags2 & CODEC_FLAG2_SHOW_ALL))
s->sync=1;
if (s2->next_picture_ptr == NULL) {
if (s2->pict_type == AV_PICTURE_TYPE_P && !s->sync) {
skip_frame = 1;
break;
if ((avctx->skip_frame >= AVDISCARD_NONREF && s2->pict_type == AV_PICTURE_TYPE_B) ||
(avctx->skip_frame >= AVDISCARD_NONKEY && s2->pict_type != AV_PICTURE_TYPE_I) ||
avctx->skip_frame >= AVDISCARD_ALL) {
skip_frame = 1;
break;
if (!s->mpeg_enc_ctx_allocated)
break;
if (s2->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
if (mb_y < avctx->skip_top || mb_y >= s2->mb_height - avctx->skip_bottom)
break;
if (!s2->pict_type) {
av_log(avctx, AV_LOG_ERROR, "Missing picture start code\n");
if (avctx->err_recognition & AV_EF_EXPLODE)
break;
if (s2->first_slice) {
skip_frame = 0;
s2->first_slice = 0;
if (mpeg_field_start(s2, buf, buf_size) < 0)
return -1;
if (!s2->current_picture_ptr) {
av_log(avctx, AV_LOG_ERROR, "current_picture not initialized\n");
if (HAVE_THREADS && (avctx->active_thread_type & FF_THREAD_SLICE) &&
!avctx->hwaccel) {
int threshold = (s2->mb_height * s->slice_count +
s2->slice_context_count / 2) /
s2->slice_context_count;
av_assert0(avctx->thread_count > 1);
if (threshold <= mb_y) {
MpegEncContext *thread_context = s2->thread_context[s->slice_count];
thread_context->start_mb_y = mb_y;
thread_context->end_mb_y = s2->mb_height;
if (s->slice_count) {
s2->thread_context[s->slice_count-1]->end_mb_y = mb_y;
ret = ff_update_duplicate_context(thread_context,
s2);
if (ret < 0)
return ret;
init_get_bits(&thread_context->gb, buf_ptr, input_size*8);
s->slice_count++;
buf_ptr += 2;
} else {
ret = mpeg_decode_slice(s2, mb_y, &buf_ptr, input_size);
emms_c();
if (ret < 0) {
if (avctx->err_recognition & AV_EF_EXPLODE)
return ret;
if (s2->resync_mb_x >= 0 && s2->resync_mb_y >= 0)
ff_er_add_slice(&s2->er, s2->resync_mb_x, s2->resync_mb_y, s2->mb_x, s2->mb_y, ER_AC_ERROR | ER_DC_ERROR | ER_MV_ERROR);
} else {
ff_er_add_slice(&s2->er, s2->resync_mb_x, s2->resync_mb_y, s2->mb_x-1, s2->mb_y, ER_AC_END | ER_DC_END | ER_MV_END);
break; | 1threat |
How do you exit a Raspberry Pi terminal? I've tried Ctrl+C but it doesn't work : <p>I've tried closing a Raspberry Pi terminal process the same way as in any other Linux terminal, but it doesn't seem to work for me.</p>
| 0debug |
static int adts_aac_probe(AVProbeData *p)
{
int max_frames = 0, first_frames = 0;
int fsize, frames;
uint8_t *buf0 = p->buf;
uint8_t *buf2;
uint8_t *buf;
uint8_t *end = buf0 + p->buf_size - 7;
buf = buf0;
for(; buf < end; buf= buf2+1) {
buf2 = buf;
for(frames = 0; buf2 < end; frames++) {
uint32_t header = AV_RB16(buf2);
if((header&0xFFF6) != 0xFFF0)
break;
fsize = (AV_RB32(buf2 + 3) >> 13) & 0x1FFF;
if(fsize < 7)
break;
buf2 += fsize;
}
max_frames = FFMAX(max_frames, frames);
if(buf == buf0)
first_frames= frames;
}
if (first_frames>=3) return AVPROBE_SCORE_MAX/2+1;
else if(max_frames>500)return AVPROBE_SCORE_MAX/2;
else if(max_frames>=3) return AVPROBE_SCORE_MAX/4;
else if(max_frames>=1) return 1;
else return 0;
} | 1threat |
How to use modulo to find if one number is divisible by the second in python? : <p>I am new to python and have been trying to solve questions I have found online, but I am stuck on one:</p>
<p>"Write a program which takes two integers as input. If the first is exactly divisible by the second (such as 10 and 5 or 24 and 8, but not 10 and 3 or 24 and 7) it outputs “Yes”, otherwise “No”, except when the second is zero, in which case it outputs “Cannot divide by zero”. Remember you can use the modulo operator (“%”) to find out whether one number is divisible by another."</p>
| 0debug |
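A minimal sketch of the check the exercise describes, assuming a helper named `divisibility_message` (my own name, not from the exercise):

```python
def divisibility_message(a, b):
    """Return the message the exercise asks for, given two integers."""
    if b == 0:
        return "Cannot divide by zero"
    # a % b is the remainder of a / b; a zero remainder means exact divisibility
    return "Yes" if a % b == 0 else "No"

print(divisibility_message(24, 8))  # Yes
```

The zero check must come first, because `a % 0` raises `ZeroDivisionError` in Python.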
How to load a YouTube live video stream in Unity? : <p>I have a question about YouTube live video streaming: is it possible to load a YouTube live stream inside Unity? Unity's default video player does not support YouTube URLs, and I have not found any other player that supports them either. Please reply soon.</p>
| 0debug |
Eclipse failing to print the output because of some error : I'm starting to learn Java, but even though I did everything the same as the guy in the tutorial, it says that I have an error.
This is my code:
class code{
public static void main(String args[])
{
System.out.println("coding...");
}
}
The output is this:
Exception in thread "main" java.lang.Error: Unresolved compilation problem:
at code.code.main(code.java:2)
I hope you'll help me sort this out. **Thanks.**
| 0debug |
size_t qemu_mempath_getpagesize(const char *mem_path)
{
#ifdef CONFIG_LINUX
struct statfs fs;
int ret;
do {
ret = statfs(mem_path, &fs);
} while (ret != 0 && errno == EINTR);
if (ret != 0) {
fprintf(stderr, "Couldn't statfs() memory path: %s\n",
strerror(errno));
exit(1);
}
if (fs.f_type == HUGETLBFS_MAGIC) {
return fs.f_bsize;
}
return getpagesize();
} | 1threat |
How can I avoid NaN while adding numbers from an array in JavaScript? : <p><strong>I get inputs from text boxes and add those inputs; if I leave one blank, my result comes out as NaN.</strong></p>
| 0debug |
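The usual fix is to treat a blank field as 0 (or skip it) before adding. The input list below is a made-up example, and the idea is sketched in Python; the same guard in JavaScript would be `Number(v) || 0` or an explicit `isNaN` check:

```python
inputs = ["1", "2", "", "4"]  # a made-up list; the blank entry is what yields NaN in JS

# Convert each entry, treating blanks as 0 instead of letting them poison the sum
total = sum(float(v) if v.strip() else 0.0 for v in inputs)
print(total)  # 7.0
```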
void rdma_start_outgoing_migration(void *opaque,
const char *host_port, Error **errp)
{
MigrationState *s = opaque;
Error *local_err = NULL, **temp = &local_err;
RDMAContext *rdma = qemu_rdma_data_init(host_port, &local_err);
int ret = 0;
if (rdma == NULL) {
ERROR(temp, "Failed to initialize RDMA data structures! %d", ret);
goto err;
}
ret = qemu_rdma_source_init(rdma, &local_err,
s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL]);
if (ret) {
goto err;
}
trace_rdma_start_outgoing_migration_after_rdma_source_init();
ret = qemu_rdma_connect(rdma, &local_err);
if (ret) {
goto err;
}
trace_rdma_start_outgoing_migration_after_rdma_connect();
s->to_dst_file = qemu_fopen_rdma(rdma, "wb");
migrate_fd_connect(s);
return;
err:
error_propagate(errp, local_err);
g_free(rdma);
migrate_fd_error(s);
}
| 1threat |
How do I extract or cut particular data from a file | Linux | : I have a scenario where I want to cut only particular data from a file.
My file contains the data below:
/f/demo/Dummy/g/STSE/abc.xml:262:123: <NAME ="ABC_BCD">
/f/demo/Dummy/g/STSE/cde.xml:263:ABX: <NAME ="ABC_BCDXXXXX OBH=TYPE">
/f/demo/Dummy/g/STSE/12a.xml:264:2:456: <NAME ="ABC_BCD">
/f/demo/Dummy/g/STSE/a2c.xml:265: <NAME ="ABC_BCD">
/f/demo/Dummy/g/STSE/wed.xml:266: <NAME ="ABC_BCD" TYPE=LS OBG=UI RML=HJ>
/f/demo/Dummy/g/STSE/as.xml:267:234: <NAME ="ABC_BCD" TYPE=LS OBG=UI RML=HJ>
/f/demo/Dummy/g/STSE/ass.xml:268:LMD : <NAME ="ABC_BCD" TYPE=LS OBG=UI>
/f/demo/Dummy/g/STSE/sc.xml:269:22221: <NAME ="ABC_BCD" TYPE=LS OBG=UI RML=HJ>
I need the output in the below format only, excluding duplicates:
<NAME ="ABC_BCD">
<NAME ="ABC_BCDXXXXX OBH=TYPE">
<NAME ="ABC_BCD">
<NAME ="ABC_BCD">
<NAME ="ABC_BCD" TYPE=LS OBG=UI RML=HJ>
<NAME ="ABC_BCD" TYPE=LS OBG=UI RML=HJ>
<NAME ="ABC_BCD" TYPE=LS OBG=UI>
<NAME ="ABC_BCD" TYPE=LS OBG=UI RML=HJ>
I used the command below, but it did not work:
sed 's/[[:digit:]]/+[[:space:]]/+ filename | 0debug |
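One way to get that output without wrestling with `sed`, sketched in Python with the standard `re` module (the two sample lines below are copied from the question; the pattern is my own):

```python
import re

# Two sample lines from the question (assumed representative of the file)
lines = [
    '/f/demo/Dummy/g/STSE/abc.xml:262:123: <NAME ="ABC_BCD">',
    '/f/demo/Dummy/g/STSE/wed.xml:266: <NAME ="ABC_BCD" TYPE=LS OBG=UI RML=HJ>',
]

# Keep everything from '<NAME' through the closing '>' on each line
pattern = re.compile(r'<NAME\s*=.*>')
tags = [m.group(0) for m in (pattern.search(line) for line in lines) if m]
print(tags)
```

Wrapping `tags` in `dict.fromkeys(tags)` would drop duplicates while preserving order, if that is what "excluding duplicates" is meant to require.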
static void css_inject_io_interrupt(SubchDev *sch)
{
S390CPU *cpu = s390_cpu_addr2state(0);
uint8_t isc = (sch->curr_status.pmcw.flags & PMCW_FLAGS_MASK_ISC) >> 11;
trace_css_io_interrupt(sch->cssid, sch->ssid, sch->schid,
sch->curr_status.pmcw.intparm, isc, "");
s390_io_interrupt(cpu,
css_build_subchannel_id(sch),
sch->schid,
sch->curr_status.pmcw.intparm,
isc << 27);
}
| 1threat |
In C, how do I catch when a float = -1.#IO? : So I have a problem in my C code where, when a float reaches infinity, it prints as -1.#IO.
At first I had no idea what it meant, but after a while of searching on Stack Overflow I found only a total of 3 posts that even mentioned this value, let alone how to solve the issue. That is where I learnt that this value is what is printed when the float reaches -infinity.
So the problem is that a float can only hold numerical values, and I cannot catch this value.
If I put
if(value == -1.#IO){...}
the compiler says unexpected #
If I put
if(value == "-1.#IO"){...}
the compiler says Char string constant '"-1.#IO"' cannot be compared with value
This is obvious, because it's trying to compare a string with a float.
Now, my formula calculates a range of values in which both negative and positive infinity can sometimes occur.
So I need to find a way to catch this value when it pops up, so I can replace it with a numerical float value (which in this case will be 0).
If you want, I can add my formula code to this post, but I think it is unnecessary for the question.
| 0debug |
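The standard answer is to compare with a classifier function rather than with the printed text: in C that is `isinf()` (and `isnan()`) from `<math.h>`. The same idea, sketched here in Python:

```python
import math

value = float("-inf")  # the value MSVC's printf renders as "-1.#IO"

# Compare with a classifier function, never with the printed text
if math.isinf(value):
    value = 0.0  # the replacement the question asks for

print(value)  # 0.0
```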
How to make an array into a variable : I make an input form into an array; how do I transform the array data into a variable when I send the form with method POST?
[enter image description here][1]
[1]: https://i.stack.imgur.com/LSI4k.png | 0debug |
How do I get Java FX running with OpenJDK 8 on Ubuntu 18.04.2 LTS? : <p>When trying to compile an JavaFX application in the environment:</p>
<pre><code>java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-8u212-b03-0ubuntu1.18.04.1-b03)
OpenJDK 64-Bit Server VM (build 25.212-b03, mixed mode)
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"
</code></pre>
<p>I get the error-message:</p>
<pre><code>cannot access javafx.event.EventHandler
[ERROR] class file for javafx.event.EventHandler not found
</code></pre>
<p>I tried to find a solution by following these links:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/50510033/how-to-add-javafx-dependencies-in-maven-with-java-10/50511752">how to add javafx dependencies in maven with java 10</a></li>
<li><a href="https://mvnrepository.com/artifact/org.openjfx/javafx/11" rel="noreferrer">https://mvnrepository.com/artifact/org.openjfx/javafx/11</a></li>
<li><a href="https://stackoverflow.com/questions/15278215/maven-project-with-javafx-with-jar-file-in-lib">Maven project with JavaFX (with jar file in `lib`)</a></li>
<li><a href="https://github.com/javafx-maven-plugin/javafx-maven-plugin" rel="noreferrer">https://github.com/javafx-maven-plugin/javafx-maven-plugin</a></li>
<li><a href="https://askubuntu.com/questions/1091157/javafx-missing-ubuntu-18-04">https://askubuntu.com/questions/1091157/javafx-missing-ubuntu-18-04</a></li>
<li><a href="https://unix.stackexchange.com/questions/505628/add-openjfx-class-path-in-debian-for-java11">https://unix.stackexchange.com/questions/505628/add-openjfx-class-path-in-debian-for-java11</a></li>
<li><a href="https://askubuntu.com/questions/609951/javafx-is-not-on-the-default-classpath-even-with-oracle-jdk-1-8">https://askubuntu.com/questions/609951/javafx-is-not-on-the-default-classpath-even-with-oracle-jdk-1-8</a></li>
<li><a href="https://stackoverflow.com/questions/34243982/why-is-javafx-is-not-included-in-openjdk-8-on-ubuntu-wily-15-10/34244308#34244308">Why is JavaFX is not included in OpenJDK 8 on Ubuntu Wily (15.10)?</a></li>
<li><a href="http://can4eve.bitplan.com/index.php/JavaFX" rel="noreferrer">http://can4eve.bitplan.com/index.php/JavaFX</a></li>
</ul>
<p>The most promising actions where to </p>
<ol>
<li>install openjfx with apt install openjfx</li>
<li>set the JAVA_HOME environment variable to /usr/lib/jvm/java-8-openjdk-amd64</li>
</ol>
<p>But the error persists.</p>
<p><strong>What needs to be done to get OpenJDK 8 and JavaFX working on Ubuntu 18.04.2 LTS?</strong></p>
| 0debug |
static av_cold int rpza_decode_init(AVCodecContext *avctx)
{
RpzaContext *s = avctx->priv_data;
s->avctx = avctx;
avctx->pix_fmt = AV_PIX_FMT_RGB555;
s->frame.data[0] = NULL;
return 0;
}
| 1threat |
Is there a best practice for setting up glibc on a Docker Alpine Linux base image? : <p>Is there a best practice for setting up glibc on a Docker Alpine Linux base image, with the correct paths, so that any spawned process can correctly reference the location of the installed libc libraries?</p>
| 0debug |
How to combine two arrays in JavaScript without using Underscore? : I have two arrays:
var a=[
{_id:1,name:'a'},
{_id:2,name:'b'},
{_id:3,name:'c'},
]
var b=[
{key:1,dis:0.5},
{key:2,dis:0.9},
{key:3,dis:10}
]
In both of these there are `_id` and `key`; what I want is, where `_id` and `key` are the same, one array which looks like this:
var result=[
{_id:1,name:'a',dis:0.5},
{_id:2,name:'b',dis:0.9},
{_id:3,name:'c',dis:1},
]
NOTE: I don't want to use any `_` functions like `_.map` or `_.extend`.
Thanks | 0debug |
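The merge is just an index by key: build a lookup from `key` to `dis`, then extend each record of `a`. Here is that logic sketched in Python rather than JavaScript (the data is copied from the question; note `b` gives `dis: 10` for key 3, so the join returns 10 there, not the 1 shown in the expected output):

```python
# Data copied from the question ("a" records plus "b" distances)
a = [{"_id": 1, "name": "a"}, {"_id": 2, "name": "b"}, {"_id": 3, "name": "c"}]
b = [{"key": 1, "dis": 0.5}, {"key": 2, "dis": 0.9}, {"key": 3, "dis": 10}]

# One pass over b builds a key -> dis lookup; one pass over a performs the join
dis_by_key = {item["key"]: item["dis"] for item in b}
result = [dict(item, dis=dis_by_key[item["_id"]])
          for item in a if item["_id"] in dis_by_key]
print(result)
```

The same two-pass shape works in plain JavaScript with an object (or `Map`) as the lookup and `Object.assign` for the extend step.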
static int disas_cp15_insn(CPUState *env, DisasContext *s, uint32_t insn)
{
uint32_t rd;
TCGv tmp, tmp2;
if (arm_feature(env, ARM_FEATURE_M))
return 1;
if ((insn & (1 << 25)) == 0) {
if (insn & (1 << 20)) {
return 1;
}
return 0;
}
if ((insn & (1 << 4)) == 0) {
return 1;
}
if (IS_USER(s) && !cp15_user_ok(insn)) {
return 1;
}
if ((insn & 0x0fff0fff) == 0x0e070f90) {
if (!arm_feature(env, ARM_FEATURE_V7)) {
gen_set_pc_im(s->pc);
s->is_jmp = DISAS_WFI;
}
return 0;
}
if ((insn & 0x0fff0fff) == 0x0e070f58) {
if (!arm_feature(env, ARM_FEATURE_V6)) {
gen_set_pc_im(s->pc);
s->is_jmp = DISAS_WFI;
return 0;
}
}
rd = (insn >> 12) & 0xf;
if (cp15_tls_load_store(env, s, insn, rd))
return 0;
tmp2 = tcg_const_i32(insn);
if (insn & ARM_CP_RW_BIT) {
tmp = new_tmp();
gen_helper_get_cp15(tmp, cpu_env, tmp2);
if (rd != 15)
store_reg(s, rd, tmp);
else
dead_tmp(tmp);
} else {
tmp = load_reg(s, rd);
gen_helper_set_cp15(cpu_env, tmp2, tmp);
dead_tmp(tmp);
if (!arm_feature(env, ARM_FEATURE_XSCALE) ||
(insn & 0x0fff0fff) != 0x0e010f10)
gen_lookup_tb(s);
}
tcg_temp_free_i32(tmp2);
return 0;
}
| 1threat |
UITabBar selectionIndicatorImage height on iPhone X : <p>I'm using a <code>selectionIndicatorImage</code> for a <code>UITabBar</code>, which is 49 points high, like this: <code>UITabBar.appearance().selectionIndicatorImage = UIImage(named: "bg-tab-selected")</code></p>
<p>Works just fine across all devices:
<a href="https://i.stack.imgur.com/QfliQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/QfliQ.png" alt="enter image description here"></a></p>
<p>Except for the iPhone X:</p>
<p><a href="https://i.stack.imgur.com/30wrN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/30wrN.png" alt="enter image description here"></a></p>
<p>I've tried setting the images to be vertically sliced only in the asset catalog, but that doesn't seem to have the desired effect. For some reason it also stretches horizontally? And there is a bit of padding on top.</p>
<p><a href="https://i.stack.imgur.com/bD6n7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bD6n7.png" alt="enter image description here"></a></p>
<p>Any ideas how I can fix this?</p>
| 0debug |
How do I read an exponential double value and output the sum (X to the power of 3) + (X to the power of 2) + X + 1? : This is my code, but it's not compiling right.
import java.util.Scanner;
public class Main {
public static void main(String[] args) {
//put your code here
Scanner input=new Scanner(System.in);
double x=input.nextDouble();
x=(xe3)+ (xe2)+ x + 1;
System.out.println(x);
}
} | 0debug |
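Java has no `e`/`^` power operator, which is why `xe3` does not compile; the usual route is `Math.pow(x, 3)` or plain multiplication. A sketch of the intended computation, here in Python, where `**` is the power operator:

```python
def poly(x):
    # x^3 + x^2 + x + 1; ** is Python's power operator (Math.pow in Java)
    return x**3 + x**2 + x + 1

print(poly(2.0))  # 15.0
```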
static ssize_t qio_channel_websock_read_wire(QIOChannelWebsock *ioc,
Error **errp)
{
ssize_t ret;
if (ioc->encinput.offset < 4096) {
size_t want = 4096 - ioc->encinput.offset;
buffer_reserve(&ioc->encinput, want);
ret = qio_channel_read(ioc->master,
(char *)ioc->encinput.buffer +
ioc->encinput.offset,
want,
errp);
if (ret < 0) {
return ret;
}
if (ret == 0 &&
ioc->encinput.offset == 0) {
return 0;
}
ioc->encinput.offset += ret;
}
while (ioc->encinput.offset != 0) {
if (ioc->payload_remain == 0) {
ret = qio_channel_websock_decode_header(ioc, errp);
if (ret < 0) {
return ret;
}
if (ret == 0) {
ioc->io_eof = TRUE;
break;
}
}
ret = qio_channel_websock_decode_payload(ioc, errp);
if (ret < 0) {
return ret;
}
}
return 1;
}
| 1threat |
static int decode_frame(AVCodecContext *avctx,
void *data,
int *data_size,
AVPacket *avpkt)
{
const uint8_t *buf = avpkt->data;
const uint8_t *buf_end = avpkt->data + avpkt->size;
int buf_size = avpkt->size;
DPXContext *const s = avctx->priv_data;
AVFrame *picture = data;
AVFrame *const p = &s->picture;
uint8_t *ptr;
int magic_num, offset, endian;
int x, y;
int w, h, stride, bits_per_color, descriptor, elements, target_packet_size, source_packet_size;
unsigned int rgbBuffer;
magic_num = AV_RB32(buf);
buf += 4;
if (magic_num == AV_RL32("SDPX")) {
endian = 0;
} else if (magic_num == AV_RB32("SDPX")) {
endian = 1;
} else {
av_log(avctx, AV_LOG_ERROR, "DPX marker not found\n");
offset = read32(&buf, endian);
buf = avpkt->data + 0x304;
w = read32(&buf, endian);
h = read32(&buf, endian);
buf += 20;
descriptor = buf[0];
buf += 3;
avctx->bits_per_raw_sample =
bits_per_color = buf[0];
switch (descriptor) {
case 51:
elements = 4;
break;
case 50:
elements = 3;
break;
default:
av_log(avctx, AV_LOG_ERROR, "Unsupported descriptor %d\n", descriptor);
switch (bits_per_color) {
case 8:
if (elements == 4) {
avctx->pix_fmt = PIX_FMT_RGBA;
} else {
avctx->pix_fmt = PIX_FMT_RGB24;
source_packet_size = elements;
target_packet_size = elements;
break;
case 10:
avctx->pix_fmt = PIX_FMT_RGB48;
target_packet_size = 6;
source_packet_size = elements * 2;
break;
case 12:
case 16:
if (endian) {
avctx->pix_fmt = PIX_FMT_RGB48BE;
} else {
avctx->pix_fmt = PIX_FMT_RGB48LE;
target_packet_size = 6;
source_packet_size = elements * 2;
break;
default:
av_log(avctx, AV_LOG_ERROR, "Unsupported color depth : %d\n", bits_per_color);
if (s->picture.data[0])
avctx->release_buffer(avctx, &s->picture);
if (av_image_check_size(w, h, 0, avctx))
if (w != avctx->width || h != avctx->height)
avcodec_set_dimensions(avctx, w, h);
if (avctx->get_buffer(avctx, p) < 0) {
av_log(avctx, AV_LOG_ERROR, "get_buffer() failed\n");
buf = avpkt->data + offset;
ptr = p->data[0];
stride = p->linesize[0];
switch (bits_per_color) {
case 10:
for (x = 0; x < avctx->height; x++) {
uint16_t *dst = (uint16_t*)ptr;
for (y = 0; y < avctx->width; y++) {
rgbBuffer = read32(&buf, endian);
*dst++ = make_16bit(rgbBuffer >> 16);
*dst++ = make_16bit(rgbBuffer >> 6);
*dst++ = make_16bit(rgbBuffer << 4);
ptr += stride;
break;
case 8:
case 12:
case 16:
if (source_packet_size == target_packet_size) {
for (x = 0; x < avctx->height; x++) {
memcpy(ptr, buf, target_packet_size*avctx->width);
ptr += stride;
buf += source_packet_size*avctx->width;
} else {
for (x = 0; x < avctx->height; x++) {
uint8_t *dst = ptr;
for (y = 0; y < avctx->width; y++) {
memcpy(dst, buf, target_packet_size);
dst += target_packet_size;
buf += source_packet_size;
ptr += stride;
break;
*picture = s->picture;
*data_size = sizeof(AVPicture);
return buf_size;
| 1threat |
static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta)
{
int nearest_delta_ms = (delta + 999999) / 1000000;
if (nearest_delta_ms < 1) {
nearest_delta_ms = 1;
}
timeKillEvent(mm_timer);
mm_timer = timeSetEvent(nearest_delta_ms,
mm_period,
mm_alarm_handler,
(DWORD_PTR)t,
TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
if (!mm_timer) {
fprintf(stderr, "Failed to re-arm win32 alarm timer %ld\n",
GetLastError());
timeEndPeriod(mm_period);
exit(1);
}
}
| 1threat |
static void patch_reloc(uint8_t *code_ptr, int type,
tcg_target_long value, tcg_target_long addend)
{
value += addend;
switch (type) {
case R_SPARC_32:
if (value != (uint32_t)value)
tcg_abort();
*(uint32_t *)code_ptr = value;
break;
case R_SPARC_WDISP22:
value -= (long)code_ptr;
value >>= 2;
if (!check_fit(value, 22))
tcg_abort();
*(uint32_t *)code_ptr = ((*(uint32_t *)code_ptr) & ~0x3fffff) | value;
break;
default:
tcg_abort();
}
}
| 1threat |
How can I use the ProgressBar with async? : Controlling the progress bar with the BackgroundWorker made my project difficult after a certain point, so I decided to move to an async structure and built the architecture on it. But now I do not know how to control the progress bar under the async structure. What are your ideas and opinions on this issue?
private async void button3_Click(object sender, EventArgs e)
{
progressBar1.Value = 1;
int value = 1;
await ProgressBarControl(value);
await Convert();
}
public Task ProgressBarControl(int e)
{
return Task.Run(() =>
{
var progress = new Progress<int>(percent =>
{
progressBar1.Value = percent;
});
});
}
But it is not working. I used BackgroundWorker before and asked this question, and you suggested BackgroundWorker to me, but after a while BackgroundWorker throws an error and the system stops responding.
What can I do? Please help me!
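At its core, `Progress<T>` is just a callback that the worker invokes and the UI subscribes to. A minimal sketch of that pattern in Python (in C# you would pass the `IProgress<int>` into the awaited method and call `progress.Report(percent)` inside the work loop):

```python
def run_with_progress(work_items, report):
    """Run the items and report percent-complete through the callback."""
    items = list(work_items)
    for i, _ in enumerate(items, start=1):
        # report() plays the role of IProgress<int>.Report in the C# code
        report(int(i * 100 / len(items)))

seen = []
run_with_progress(range(4), seen.append)
print(seen)  # [25, 50, 75, 100]
```

In the question's code, the `Progress<int>` is created inside a `Task.Run` that does no work, so nothing ever calls it; the reporter must be handed to the method that actually loops.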
static void bdrv_query_info(BlockBackend *blk, BlockInfo **p_info,
Error **errp)
{
BlockInfo *info = g_malloc0(sizeof(*info));
BlockDriverState *bs = blk_bs(blk);
BlockDriverState *bs0;
ImageInfo **p_image_info;
Error *local_err = NULL;
info->device = g_strdup(blk_name(blk));
info->type = g_strdup("unknown");
info->locked = blk_dev_is_medium_locked(blk);
info->removable = blk_dev_has_removable_media(blk);
if (blk_dev_has_removable_media(blk)) {
info->has_tray_open = true;
info->tray_open = blk_dev_is_tray_open(blk);
}
if (bdrv_iostatus_is_enabled(bs)) {
info->has_io_status = true;
info->io_status = bs->iostatus;
}
if (!QLIST_EMPTY(&bs->dirty_bitmaps)) {
info->has_dirty_bitmaps = true;
info->dirty_bitmaps = bdrv_query_dirty_bitmaps(bs);
}
if (bs->drv) {
info->has_inserted = true;
info->inserted = bdrv_block_device_info(bs);
bs0 = bs;
p_image_info = &info->inserted->image;
while (1) {
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto err;
}
if (bs0->drv && bs0->backing_hd) {
bs0 = bs0->backing_hd;
(*p_image_info)->has_backing_image = true;
p_image_info = &((*p_image_info)->backing_image);
} else {
break;
}
}
}
*p_info = info;
return;
err:
qapi_free_BlockInfo(info);
}
| 1threat |
Access struct2 variable in javascript : I have set a strut varaible in the jsp.
<s:set var="actionName" value="stockCountFind.action" scope="request" />
In the javascript function I access it using
function getAttributeAndGoToAction()
{
var actionName = <s:property value="#actionName"/>;
alert(actionName);
}
But when I try to call the js function. It says it isn't defined. | 0debug |
Applying a function along a numpy array : <p>I've the following numpy ndarray.</p>
<pre><code>[ -0.54761371 17.04850603 4.86054302]
</code></pre>
<p>I want to apply this function to all elements of the array</p>
<pre><code>def sigmoid(x):
return 1 / (1 + math.exp(-x))
probabilities = np.apply_along_axis(sigmoid, -1, scores)
</code></pre>
<p>This is the error that I get.</p>
<pre><code>TypeError: only length-1 arrays can be converted to Python scalars
</code></pre>
<p>What am I doing wrong.</p>
| 0debug |
static int disas_thumb2_insn(CPUState *env, DisasContext *s, uint16_t insn_hw1)
{
uint32_t insn, imm, shift, offset;
uint32_t rd, rn, rm, rs;
TCGv tmp;
TCGv tmp2;
TCGv tmp3;
TCGv addr;
TCGv_i64 tmp64;
int op;
int shiftop;
int conds;
int logic_cc;
if (!(arm_feature(env, ARM_FEATURE_THUMB2)
|| arm_feature (env, ARM_FEATURE_M))) {
insn = insn_hw1;
if ((insn & (1 << 12)) == 0) {
offset = ((insn & 0x7ff) << 1);
tmp = load_reg(s, 14);
tcg_gen_addi_i32(tmp, tmp, offset);
tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
tmp2 = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp2, s->pc | 1);
store_reg(s, 14, tmp2);
gen_bx(s, tmp);
return 0;
}
if (insn & (1 << 11)) {
offset = ((insn & 0x7ff) << 1) | 1;
tmp = load_reg(s, 14);
tcg_gen_addi_i32(tmp, tmp, offset);
tmp2 = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp2, s->pc | 1);
store_reg(s, 14, tmp2);
gen_bx(s, tmp);
return 0;
}
if ((s->pc & ~TARGET_PAGE_MASK) == 0) {
offset = ((int32_t)insn << 21) >> 9;
tcg_gen_movi_i32(cpu_R[14], s->pc + 2 + offset);
return 0;
}
}
insn = lduw_code(s->pc);
s->pc += 2;
insn |= (uint32_t)insn_hw1 << 16;
if ((insn & 0xf800e800) != 0xf000e800) {
ARCH(6T2);
}
rn = (insn >> 16) & 0xf;
rs = (insn >> 12) & 0xf;
rd = (insn >> 8) & 0xf;
rm = insn & 0xf;
switch ((insn >> 25) & 0xf) {
case 0: case 1: case 2: case 3:
abort();
case 4:
if (insn & (1 << 22)) {
if (insn & 0x01200000) {
if (rn == 15) {
addr = tcg_temp_new_i32();
tcg_gen_movi_i32(addr, s->pc & ~3);
} else {
addr = load_reg(s, rn);
}
offset = (insn & 0xff) * 4;
if ((insn & (1 << 23)) == 0)
offset = -offset;
if (insn & (1 << 24)) {
tcg_gen_addi_i32(addr, addr, offset);
offset = 0;
}
if (insn & (1 << 20)) {
tmp = gen_ld32(addr, IS_USER(s));
store_reg(s, rs, tmp);
tcg_gen_addi_i32(addr, addr, 4);
tmp = gen_ld32(addr, IS_USER(s));
store_reg(s, rd, tmp);
} else {
tmp = load_reg(s, rs);
gen_st32(tmp, addr, IS_USER(s));
tcg_gen_addi_i32(addr, addr, 4);
tmp = load_reg(s, rd);
gen_st32(tmp, addr, IS_USER(s));
}
if (insn & (1 << 21)) {
if (rn == 15)
goto illegal_op;
tcg_gen_addi_i32(addr, addr, offset - 4);
store_reg(s, rn, addr);
} else {
tcg_temp_free_i32(addr);
}
} else if ((insn & (1 << 23)) == 0) {
addr = tcg_temp_local_new();
load_reg_var(s, addr, rn);
tcg_gen_addi_i32(addr, addr, (insn & 0xff) << 2);
if (insn & (1 << 20)) {
gen_load_exclusive(s, rs, 15, addr, 2);
} else {
gen_store_exclusive(s, rd, rs, 15, addr, 2);
}
tcg_temp_free(addr);
} else if ((insn & (1 << 6)) == 0) {
if (rn == 15) {
addr = tcg_temp_new_i32();
tcg_gen_movi_i32(addr, s->pc);
} else {
addr = load_reg(s, rn);
}
tmp = load_reg(s, rm);
tcg_gen_add_i32(addr, addr, tmp);
if (insn & (1 << 4)) {
tcg_gen_add_i32(addr, addr, tmp);
tcg_temp_free_i32(tmp);
tmp = gen_ld16u(addr, IS_USER(s));
} else {
tcg_temp_free_i32(tmp);
tmp = gen_ld8u(addr, IS_USER(s));
}
tcg_temp_free_i32(addr);
tcg_gen_shli_i32(tmp, tmp, 1);
tcg_gen_addi_i32(tmp, tmp, s->pc);
store_reg(s, 15, tmp);
} else {
ARCH(7);
op = (insn >> 4) & 0x3;
if (op == 2) {
goto illegal_op;
}
addr = tcg_temp_local_new();
load_reg_var(s, addr, rn);
if (insn & (1 << 20)) {
gen_load_exclusive(s, rs, rd, addr, op);
} else {
gen_store_exclusive(s, rm, rs, rd, addr, op);
}
tcg_temp_free(addr);
}
} else {
if (((insn >> 23) & 1) == ((insn >> 24) & 1)) {
if (IS_USER(s))
goto illegal_op;
if (insn & (1 << 20)) {
addr = load_reg(s, rn);
if ((insn & (1 << 24)) == 0)
tcg_gen_addi_i32(addr, addr, -8);
tmp = gen_ld32(addr, 0);
tcg_gen_addi_i32(addr, addr, 4);
tmp2 = gen_ld32(addr, 0);
if (insn & (1 << 21)) {
if (insn & (1 << 24)) {
tcg_gen_addi_i32(addr, addr, 4);
} else {
tcg_gen_addi_i32(addr, addr, -4);
}
store_reg(s, rn, addr);
} else {
tcg_temp_free_i32(addr);
}
gen_rfe(s, tmp, tmp2);
} else {
op = (insn & 0x1f);
addr = tcg_temp_new_i32();
tmp = tcg_const_i32(op);
gen_helper_get_r13_banked(addr, cpu_env, tmp);
tcg_temp_free_i32(tmp);
if ((insn & (1 << 24)) == 0) {
tcg_gen_addi_i32(addr, addr, -8);
}
tmp = load_reg(s, 14);
gen_st32(tmp, addr, 0);
tcg_gen_addi_i32(addr, addr, 4);
tmp = tcg_temp_new_i32();
gen_helper_cpsr_read(tmp);
gen_st32(tmp, addr, 0);
if (insn & (1 << 21)) {
if ((insn & (1 << 24)) == 0) {
tcg_gen_addi_i32(addr, addr, -4);
} else {
tcg_gen_addi_i32(addr, addr, 4);
}
tmp = tcg_const_i32(op);
gen_helper_set_r13_banked(cpu_env, tmp, addr);
tcg_temp_free_i32(tmp);
} else {
tcg_temp_free_i32(addr);
}
}
} else {
int i;
addr = load_reg(s, rn);
offset = 0;
for (i = 0; i < 16; i++) {
if (insn & (1 << i))
offset += 4;
}
if (insn & (1 << 24)) {
tcg_gen_addi_i32(addr, addr, -offset);
}
for (i = 0; i < 16; i++) {
if ((insn & (1 << i)) == 0)
continue;
if (insn & (1 << 20)) {
tmp = gen_ld32(addr, IS_USER(s));
if (i == 15) {
gen_bx(s, tmp);
} else {
store_reg(s, i, tmp);
}
} else {
tmp = load_reg(s, i);
gen_st32(tmp, addr, IS_USER(s));
}
tcg_gen_addi_i32(addr, addr, 4);
}
if (insn & (1 << 21)) {
if (insn & (1 << 24)) {
tcg_gen_addi_i32(addr, addr, -offset);
}
if (insn & (1 << rn))
goto illegal_op;
store_reg(s, rn, addr);
} else {
tcg_temp_free_i32(addr);
}
}
}
break;
case 5:
op = (insn >> 21) & 0xf;
if (op == 6) {
tmp = load_reg(s, rn);
tmp2 = load_reg(s, rm);
shift = ((insn >> 10) & 0x1c) | ((insn >> 6) & 0x3);
if (insn & (1 << 5)) {
if (shift == 0)
shift = 31;
tcg_gen_sari_i32(tmp2, tmp2, shift);
tcg_gen_andi_i32(tmp, tmp, 0xffff0000);
tcg_gen_ext16u_i32(tmp2, tmp2);
} else {
if (shift)
tcg_gen_shli_i32(tmp2, tmp2, shift);
tcg_gen_ext16u_i32(tmp, tmp);
tcg_gen_andi_i32(tmp2, tmp2, 0xffff0000);
}
tcg_gen_or_i32(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
store_reg(s, rd, tmp);
} else {
if (rn == 15) {
tmp = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp, 0);
} else {
tmp = load_reg(s, rn);
}
tmp2 = load_reg(s, rm);
shiftop = (insn >> 4) & 3;
shift = ((insn >> 6) & 3) | ((insn >> 10) & 0x1c);
conds = (insn & (1 << 20)) != 0;
logic_cc = (conds && thumb2_logic_op(op));
gen_arm_shift_im(tmp2, shiftop, shift, logic_cc);
if (gen_thumb2_data_op(s, op, conds, 0, tmp, tmp2))
goto illegal_op;
tcg_temp_free_i32(tmp2);
if (rd != 15) {
store_reg(s, rd, tmp);
} else {
tcg_temp_free_i32(tmp);
}
}
break;
case 13:
op = ((insn >> 22) & 6) | ((insn >> 7) & 1);
if (op < 4 && (insn & 0xf000) != 0xf000)
goto illegal_op;
switch (op) {
case 0:
tmp = load_reg(s, rn);
tmp2 = load_reg(s, rm);
if ((insn & 0x70) != 0)
goto illegal_op;
op = (insn >> 21) & 3;
logic_cc = (insn & (1 << 20)) != 0;
gen_arm_shift_reg(tmp, op, tmp2, logic_cc);
if (logic_cc)
gen_logic_CC(tmp);
store_reg_bx(env, s, rd, tmp);
break;
case 1:
tmp = load_reg(s, rm);
shift = (insn >> 4) & 3;
if (shift != 0)
tcg_gen_rotri_i32(tmp, tmp, shift * 8);
op = (insn >> 20) & 7;
switch (op) {
case 0: gen_sxth(tmp); break;
case 1: gen_uxth(tmp); break;
case 2: gen_sxtb16(tmp); break;
case 3: gen_uxtb16(tmp); break;
case 4: gen_sxtb(tmp); break;
case 5: gen_uxtb(tmp); break;
default: goto illegal_op;
}
if (rn != 15) {
tmp2 = load_reg(s, rn);
if ((op >> 1) == 1) {
gen_add16(tmp, tmp2);
} else {
tcg_gen_add_i32(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
}
}
store_reg(s, rd, tmp);
break;
case 2:
op = (insn >> 20) & 7;
shift = (insn >> 4) & 7;
if ((op & 3) == 3 || (shift & 3) == 3)
goto illegal_op;
tmp = load_reg(s, rn);
tmp2 = load_reg(s, rm);
gen_thumb2_parallel_addsub(op, shift, tmp, tmp2);
tcg_temp_free_i32(tmp2);
store_reg(s, rd, tmp);
break;
case 3:
op = ((insn >> 17) & 0x38) | ((insn >> 4) & 7);
if (op < 4) {
tmp = load_reg(s, rn);
tmp2 = load_reg(s, rm);
if (op & 1)
gen_helper_double_saturate(tmp, tmp);
if (op & 2)
gen_helper_sub_saturate(tmp, tmp2, tmp);
else
gen_helper_add_saturate(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
} else {
tmp = load_reg(s, rn);
switch (op) {
case 0x0a:
gen_helper_rbit(tmp, tmp);
break;
case 0x08:
tcg_gen_bswap32_i32(tmp, tmp);
break;
case 0x09:
gen_rev16(tmp);
break;
case 0x0b:
gen_revsh(tmp);
break;
case 0x10:
tmp2 = load_reg(s, rm);
tmp3 = tcg_temp_new_i32();
tcg_gen_ld_i32(tmp3, cpu_env, offsetof(CPUState, GE));
gen_helper_sel_flags(tmp, tmp3, tmp, tmp2);
tcg_temp_free_i32(tmp3);
tcg_temp_free_i32(tmp2);
break;
case 0x18:
gen_helper_clz(tmp, tmp);
break;
default:
goto illegal_op;
}
}
store_reg(s, rd, tmp);
break;
case 4: case 5:
op = (insn >> 4) & 0xf;
tmp = load_reg(s, rn);
tmp2 = load_reg(s, rm);
switch ((insn >> 20) & 7) {
case 0:
tcg_gen_mul_i32(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
if (rs != 15) {
tmp2 = load_reg(s, rs);
if (op)
tcg_gen_sub_i32(tmp, tmp2, tmp);
else
tcg_gen_add_i32(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
}
break;
case 1:
gen_mulxy(tmp, tmp2, op & 2, op & 1);
tcg_temp_free_i32(tmp2);
if (rs != 15) {
tmp2 = load_reg(s, rs);
gen_helper_add_setq(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
}
break;
case 2:
case 4:
if (op)
gen_swap_half(tmp2);
gen_smul_dual(tmp, tmp2);
if (insn & (1 << 22)) {
tcg_gen_sub_i32(tmp, tmp, tmp2);
} else {
tcg_gen_add_i32(tmp, tmp, tmp2);
}
tcg_temp_free_i32(tmp2);
if (rs != 15)
{
tmp2 = load_reg(s, rs);
gen_helper_add_setq(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
}
break;
case 3:
if (op)
tcg_gen_sari_i32(tmp2, tmp2, 16);
else
gen_sxth(tmp2);
tmp64 = gen_muls_i64_i32(tmp, tmp2);
tcg_gen_shri_i64(tmp64, tmp64, 16);
tmp = tcg_temp_new_i32();
tcg_gen_trunc_i64_i32(tmp, tmp64);
tcg_temp_free_i64(tmp64);
if (rs != 15)
{
tmp2 = load_reg(s, rs);
gen_helper_add_setq(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
}
break;
case 5: case 6:
tmp64 = gen_muls_i64_i32(tmp, tmp2);
if (rs != 15) {
tmp = load_reg(s, rs);
if (insn & (1 << 20)) {
tmp64 = gen_addq_msw(tmp64, tmp);
} else {
tmp64 = gen_subq_msw(tmp64, tmp);
}
}
if (insn & (1 << 4)) {
tcg_gen_addi_i64(tmp64, tmp64, 0x80000000u);
}
tcg_gen_shri_i64(tmp64, tmp64, 32);
tmp = tcg_temp_new_i32();
tcg_gen_trunc_i64_i32(tmp, tmp64);
tcg_temp_free_i64(tmp64);
break;
case 7:
gen_helper_usad8(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
if (rs != 15) {
tmp2 = load_reg(s, rs);
tcg_gen_add_i32(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
}
break;
}
store_reg(s, rd, tmp);
break;
case 6: case 7:
op = ((insn >> 4) & 0xf) | ((insn >> 16) & 0x70);
tmp = load_reg(s, rn);
tmp2 = load_reg(s, rm);
if ((op & 0x50) == 0x10) {
if (!arm_feature(env, ARM_FEATURE_DIV))
goto illegal_op;
if (op & 0x20)
gen_helper_udiv(tmp, tmp, tmp2);
else
gen_helper_sdiv(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
store_reg(s, rd, tmp);
} else if ((op & 0xe) == 0xc) {
if (op & 1)
gen_swap_half(tmp2);
gen_smul_dual(tmp, tmp2);
if (op & 0x10) {
tcg_gen_sub_i32(tmp, tmp, tmp2);
} else {
tcg_gen_add_i32(tmp, tmp, tmp2);
}
tcg_temp_free_i32(tmp2);
tmp64 = tcg_temp_new_i64();
tcg_gen_ext_i32_i64(tmp64, tmp);
tcg_temp_free_i32(tmp);
gen_addq(s, tmp64, rs, rd);
gen_storeq_reg(s, rs, rd, tmp64);
tcg_temp_free_i64(tmp64);
} else {
if (op & 0x20) {
tmp64 = gen_mulu_i64_i32(tmp, tmp2);
} else {
if (op & 8) {
gen_mulxy(tmp, tmp2, op & 2, op & 1);
tcg_temp_free_i32(tmp2);
tmp64 = tcg_temp_new_i64();
tcg_gen_ext_i32_i64(tmp64, tmp);
tcg_temp_free_i32(tmp);
} else {
tmp64 = gen_muls_i64_i32(tmp, tmp2);
}
}
if (op & 4) {
gen_addq_lo(s, tmp64, rs);
gen_addq_lo(s, tmp64, rd);
} else if (op & 0x40) {
gen_addq(s, tmp64, rs, rd);
}
gen_storeq_reg(s, rs, rd, tmp64);
tcg_temp_free_i64(tmp64);
}
break;
}
break;
case 6: case 7: case 14: case 15:
if (((insn >> 24) & 3) == 3) {
insn = (insn & 0xe2ffffff) | ((insn & (1 << 28)) >> 4) | (1 << 28);
if (disas_neon_data_insn(env, s, insn))
goto illegal_op;
} else {
if (insn & (1 << 28))
goto illegal_op;
if (disas_coproc_insn (env, s, insn))
goto illegal_op;
}
break;
case 8: case 9: case 10: case 11:
if (insn & (1 << 15)) {
if (insn & 0x5000) {
offset = ((int32_t)insn << 5) >> 9 & ~(int32_t)0xfff;
offset |= (insn & 0x7ff) << 1;
offset ^= ((~insn) & (1 << 13)) << 10;
offset ^= ((~insn) & (1 << 11)) << 11;
if (insn & (1 << 14)) {
tcg_gen_movi_i32(cpu_R[14], s->pc | 1);
}
offset += s->pc;
if (insn & (1 << 12)) {
gen_jmp(s, offset);
} else {
offset &= ~(uint32_t)2;
gen_bx_im(s, offset);
}
} else if (((insn >> 23) & 7) == 7) {
if (insn & (1 << 13))
goto illegal_op;
if (insn & (1 << 26)) {
goto illegal_op;
} else {
op = (insn >> 20) & 7;
switch (op) {
case 0:
if (IS_M(env)) {
tmp = load_reg(s, rn);
addr = tcg_const_i32(insn & 0xff);
gen_helper_v7m_msr(cpu_env, addr, tmp);
tcg_temp_free_i32(addr);
tcg_temp_free_i32(tmp);
gen_lookup_tb(s);
break;
}
case 1:
if (IS_M(env))
goto illegal_op;
tmp = load_reg(s, rn);
if (gen_set_psr(s,
msr_mask(env, s, (insn >> 8) & 0xf, op == 1),
op == 1, tmp))
goto illegal_op;
break;
case 2:
if (((insn >> 8) & 7) == 0) {
gen_nop_hint(s, insn & 0xff);
}
if (IS_USER(s))
break;
offset = 0;
imm = 0;
if (insn & (1 << 10)) {
if (insn & (1 << 7))
offset |= CPSR_A;
if (insn & (1 << 6))
offset |= CPSR_I;
if (insn & (1 << 5))
offset |= CPSR_F;
if (insn & (1 << 9))
imm = CPSR_A | CPSR_I | CPSR_F;
}
if (insn & (1 << 8)) {
offset |= 0x1f;
imm |= (insn & 0x1f);
}
if (offset) {
gen_set_psr_im(s, offset, 0, imm);
}
break;
case 3:
ARCH(7);
op = (insn >> 4) & 0xf;
switch (op) {
case 2:
gen_clrex(s);
break;
case 4:
case 5:
case 6:
break;
default:
goto illegal_op;
}
break;
case 4:
tmp = load_reg(s, rn);
gen_bx(s, tmp);
break;
case 5:
if (IS_USER(s)) {
goto illegal_op;
}
if (rn != 14 || rd != 15) {
goto illegal_op;
}
tmp = load_reg(s, rn);
tcg_gen_subi_i32(tmp, tmp, insn & 0xff);
gen_exception_return(s, tmp);
break;
case 6:
tmp = tcg_temp_new_i32();
if (IS_M(env)) {
addr = tcg_const_i32(insn & 0xff);
gen_helper_v7m_mrs(tmp, cpu_env, addr);
tcg_temp_free_i32(addr);
} else {
gen_helper_cpsr_read(tmp);
}
store_reg(s, rd, tmp);
break;
case 7:
if (IS_USER(s) || IS_M(env))
goto illegal_op;
tmp = load_cpu_field(spsr);
store_reg(s, rd, tmp);
break;
}
}
} else {
op = (insn >> 22) & 0xf;
s->condlabel = gen_new_label();
gen_test_cc(op ^ 1, s->condlabel);
s->condjmp = 1;
offset = (insn & 0x7ff) << 1;
offset |= (insn & 0x003f0000) >> 4;
offset |= ((int32_t)((insn << 5) & 0x80000000)) >> 11;
offset |= (insn & (1 << 13)) << 5;
offset |= (insn & (1 << 11)) << 8;
gen_jmp(s, s->pc + offset);
}
} else {
if (insn & (1 << 25)) {
if (insn & (1 << 24)) {
if (insn & (1 << 20))
goto illegal_op;
op = (insn >> 21) & 7;
imm = insn & 0x1f;
shift = ((insn >> 6) & 3) | ((insn >> 10) & 0x1c);
if (rn == 15) {
tmp = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp, 0);
} else {
tmp = load_reg(s, rn);
}
switch (op) {
case 2:
imm++;
if (shift + imm > 32)
goto illegal_op;
if (imm < 32)
gen_sbfx(tmp, shift, imm);
break;
case 6:
imm++;
if (shift + imm > 32)
goto illegal_op;
if (imm < 32)
gen_ubfx(tmp, shift, (1u << imm) - 1);
break;
case 3:
if (imm < shift)
goto illegal_op;
imm = imm + 1 - shift;
if (imm != 32) {
tmp2 = load_reg(s, rd);
gen_bfi(tmp, tmp2, tmp, shift, (1u << imm) - 1);
tcg_temp_free_i32(tmp2);
}
break;
case 7:
goto illegal_op;
default:
if (shift) {
if (op & 1)
tcg_gen_sari_i32(tmp, tmp, shift);
else
tcg_gen_shli_i32(tmp, tmp, shift);
}
tmp2 = tcg_const_i32(imm);
if (op & 4) {
if ((op & 1) && shift == 0)
gen_helper_usat16(tmp, tmp, tmp2);
else
gen_helper_usat(tmp, tmp, tmp2);
} else {
if ((op & 1) && shift == 0)
gen_helper_ssat16(tmp, tmp, tmp2);
else
gen_helper_ssat(tmp, tmp, tmp2);
}
tcg_temp_free_i32(tmp2);
break;
}
store_reg(s, rd, tmp);
} else {
imm = ((insn & 0x04000000) >> 15)
| ((insn & 0x7000) >> 4) | (insn & 0xff);
if (insn & (1 << 22)) {
imm |= (insn >> 4) & 0xf000;
if (insn & (1 << 23)) {
tmp = load_reg(s, rd);
tcg_gen_ext16u_i32(tmp, tmp);
tcg_gen_ori_i32(tmp, tmp, imm << 16);
} else {
tmp = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp, imm);
}
} else {
if (rn == 15) {
offset = s->pc & ~(uint32_t)3;
if (insn & (1 << 23))
offset -= imm;
else
offset += imm;
tmp = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp, offset);
} else {
tmp = load_reg(s, rn);
if (insn & (1 << 23))
tcg_gen_subi_i32(tmp, tmp, imm);
else
tcg_gen_addi_i32(tmp, tmp, imm);
}
}
store_reg(s, rd, tmp);
}
} else {
int shifter_out = 0;
shift = ((insn & 0x04000000) >> 23) | ((insn & 0x7000) >> 12);
imm = (insn & 0xff);
switch (shift) {
case 0:
break;
case 1:
imm |= imm << 16;
break;
case 2:
imm |= imm << 16;
imm <<= 8;
break;
case 3:
imm |= imm << 16;
imm |= imm << 8;
break;
default:
shift = (shift << 1) | (imm >> 7);
imm |= 0x80;
imm = imm << (32 - shift);
shifter_out = 1;
break;
}
tmp2 = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp2, imm);
rn = (insn >> 16) & 0xf;
if (rn == 15) {
tmp = tcg_temp_new_i32();
tcg_gen_movi_i32(tmp, 0);
} else {
tmp = load_reg(s, rn);
}
op = (insn >> 21) & 0xf;
if (gen_thumb2_data_op(s, op, (insn & (1 << 20)) != 0,
shifter_out, tmp, tmp2))
goto illegal_op;
tcg_temp_free_i32(tmp2);
rd = (insn >> 8) & 0xf;
if (rd != 15) {
store_reg(s, rd, tmp);
} else {
tcg_temp_free_i32(tmp);
}
}
}
break;
case 12:
{
int postinc = 0;
int writeback = 0;
int user;
if ((insn & 0x01100000) == 0x01000000) {
if (disas_neon_ls_insn(env, s, insn))
goto illegal_op;
break;
}
op = ((insn >> 21) & 3) | ((insn >> 22) & 4);
if (rs == 15) {
if (!(insn & (1 << 20))) {
goto illegal_op;
}
if (op != 2) {
int op1 = (insn >> 23) & 3;
int op2 = (insn >> 6) & 0x3f;
if (op & 2) {
goto illegal_op;
}
if (rn == 15) {
return 0;
}
if (op1 & 1) {
return 0;
}
if ((op2 == 0) || ((op2 & 0x3c) == 0x30)) {
return 0;
}
return 1;
}
}
user = IS_USER(s);
if (rn == 15) {
addr = tcg_temp_new_i32();
imm = s->pc & 0xfffffffc;
if (insn & (1 << 23))
imm += insn & 0xfff;
else
imm -= insn & 0xfff;
tcg_gen_movi_i32(addr, imm);
} else {
addr = load_reg(s, rn);
if (insn & (1 << 23)) {
imm = insn & 0xfff;
tcg_gen_addi_i32(addr, addr, imm);
} else {
imm = insn & 0xff;
switch ((insn >> 8) & 0xf) {
case 0x0:
shift = (insn >> 4) & 0xf;
if (shift > 3) {
tcg_temp_free_i32(addr);
goto illegal_op;
}
tmp = load_reg(s, rm);
if (shift)
tcg_gen_shli_i32(tmp, tmp, shift);
tcg_gen_add_i32(addr, addr, tmp);
tcg_temp_free_i32(tmp);
break;
case 0xc:
tcg_gen_addi_i32(addr, addr, -imm);
break;
case 0xe:
tcg_gen_addi_i32(addr, addr, imm);
user = 1;
break;
case 0x9:
imm = -imm;
case 0xb:
postinc = 1;
writeback = 1;
break;
case 0xd:
imm = -imm;
case 0xf:
tcg_gen_addi_i32(addr, addr, imm);
writeback = 1;
break;
default:
tcg_temp_free_i32(addr);
goto illegal_op;
}
}
}
if (insn & (1 << 20)) {
switch (op) {
case 0: tmp = gen_ld8u(addr, user); break;
case 4: tmp = gen_ld8s(addr, user); break;
case 1: tmp = gen_ld16u(addr, user); break;
case 5: tmp = gen_ld16s(addr, user); break;
case 2: tmp = gen_ld32(addr, user); break;
default:
tcg_temp_free_i32(addr);
goto illegal_op;
}
if (rs == 15) {
gen_bx(s, tmp);
} else {
store_reg(s, rs, tmp);
}
} else {
tmp = load_reg(s, rs);
switch (op) {
case 0: gen_st8(tmp, addr, user); break;
case 1: gen_st16(tmp, addr, user); break;
case 2: gen_st32(tmp, addr, user); break;
default:
tcg_temp_free_i32(addr);
goto illegal_op;
}
}
if (postinc)
tcg_gen_addi_i32(addr, addr, imm);
if (writeback) {
store_reg(s, rn, addr);
} else {
tcg_temp_free_i32(addr);
}
}
break;
default:
goto illegal_op;
}
return 0;
illegal_op:
return 1;
}
| 1threat |
merge two key values and assign it to a new key in a single dictionary python : I am new to Python. I have a dictionary with some values stored in the form of a list. I want to combine the values of two keys and assign the result to a new key in the same dictionary. Searching Stack Overflow gave various results for combining the values of two different dicts, but I need to combine values within a single dict. So how do I merge two key values and assign the result to a new key in a single dictionary?
Here's the sample code:
fields = ["Classification","Fuel_Type"] #two fields to combine
target = "Classification_Fuel_Type"
d = [
{'Fuel': 'Gas', 'Gears': 6, 'Width': 209, 'Year': 2012, 'Engine': 'Lincoln 5.4L 8 Cylinder 310 hp 365 ft-lbs FFV', 'Classification': 'Automatic transmission'},
{'Fuel': 'E85', 'Gears': 5, 'Width': 209, 'Year': 2014, 'Engine': 'Lincoln 5.4L 8 Cylinder 310 hp 365 ft-lbs FFV', 'Classification': 'Automatic transmission'},
{'Fuel': 'E85', 'Gears': 6, 'Width': 509, 'Year': 2011, 'Engine': 'Lincoln 5.4L 8 Cylinder 310 hp 365 ft-lbs FFV', 'Classification': 'Automatic transmission'}]
Required output:
[{'Classification_Fuel_Type':'Automatic transmissionGas','Fuel': 'Gas', 'Gears': 6, 'Width': 209, 'Year': 2012, 'Engine': 'Lincoln 5.4L 8 Cylinder 310 hp 365 ft-lbs FFV', 'Classification': 'Automatic transmission'},
{'Classification_Fuel_Type':'Automatic transmissionE85','Fuel': 'E85', 'Gears': 5, 'Width': 209, 'Year': 2014, 'Engine': 'Lincoln 5.4L 8 Cylinder 310 hp 365 ft-lbs FFV', 'Classification': 'Automatic transmission'},
{'Classification_Fuel_Type':'Automatic transmissionE85','Fuel': 'E85', 'Gears': 6, 'Width': 509, 'Year': 2011, 'Engine': 'Lincoln 5.4L 8 Cylinder 310 hp 365 ft-lbs FFV', 'Classification': 'Automatic transmission'}]
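A minimal sketch of one way to do this. Note one likely cause of trouble in the question: the `fields` list names `Classification` and `Fuel_Type`, but the sample dicts only contain a `Fuel` key — the field names below are adjusted to keys that actually exist in the data, so treat them as assumptions:

```python
def add_combined_key(rows, fields, target):
    """Return new dicts with `target` mapped to the concatenated values of `fields`."""
    # {**row, ...} copies each dict, so the original rows are left untouched
    return [{**row, target: "".join(str(row[f]) for f in fields)} for row in rows]

d = [
    {'Fuel': 'Gas', 'Classification': 'Automatic transmission'},
    {'Fuel': 'E85', 'Classification': 'Automatic transmission'},
]
merged = add_combined_key(d, ["Classification", "Fuel"], "Classification_Fuel")
print(merged[0]["Classification_Fuel"])  # Automatic transmissionGas
```

To mutate the rows in place instead, iterate and assign `row[target] = ...` directly; the copying version above is usually safer when the original list is still needed elsewhere.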
| 0debug |
Is there a hdfs command to list files in HDFS directory as per timestamp : <p>Is there an <code>hdfs</code> command to list the files in an HDFS directory sorted by timestamp, ascending or descending? By default, the <code>hdfs dfs -ls</code> command gives an unsorted list of files.</p>
<p>When I searched for answers, what I found was a workaround, i.e. <code>hdfs dfs -ls /tmp | sort -k6,7</code>. But is there a better way, built into the <code>hdfs dfs</code> command line?</p>
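As far as I know, older Hadoop releases have no sort flag for `ls`; newer versions add `-t` (sort by modification time) and `-r` (reverse) — check `hdfs dfs -help ls` on your cluster. Failing that, a small post-processing sketch, assuming the standard `-ls` layout of permissions, replication, owner, group, size, date, time, path:

```python
def sort_hdfs_ls(ls_output, reverse=False):
    """Sort the lines of `hdfs dfs -ls` output by the date and time columns."""
    lines = [l for l in ls_output.splitlines()
             if l.strip() and not l.startswith("Found")]  # drop the "Found N items" header
    # fields[5] is the date (YYYY-MM-DD) and fields[6] the time (HH:MM);
    # both sort correctly as plain strings
    return sorted(lines, key=lambda l: (l.split()[5], l.split()[6]), reverse=reverse)

sample = ("Found 2 items\n"
          "-rw-r--r--   3 hdfs hdfs   10 2018-05-01 09:30 /tmp/b\n"
          "-rw-r--r--   3 hdfs hdfs   10 2017-01-01 12:00 /tmp/a\n")
print(sort_hdfs_ls(sample)[0].split()[-1])  # /tmp/a
```

In practice the `ls_output` string would come from running `hdfs dfs -ls <dir>` via `subprocess.run(..., capture_output=True, text=True)`.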
| 0debug |
How to create page template in wordpress? : <p>I want to make a custom page template in wordpress to customized all section for a specific page. In my fully custom made theme using bootstrap.</p>
| 0debug |
How can I store and retrieve data in VSTO outlook addin? : <p>What is the best method to store and retrieve some data (like key-value pair)in VSTO outlook addin project?</p>
| 0debug |
Unknown attributes in android xml : I mean the Android attributes like width, height, id, and so on.
I tried everything:
1. Invalidate caches
2. Clean & rebuild the project
3. Delete the .idea folder
and all the other answers that were suggested. None of them worked for me.
What did work was downgrading the SDK version from 27 to 26.
Any clues as to what might have happened to my SDK 27 installation?
int64_t avio_seek(AVIOContext *s, int64_t offset, int whence)
{
int64_t offset1;
int64_t pos;
int force = whence & AVSEEK_FORCE;
int buffer_size;
int short_seek;
whence &= ~AVSEEK_FORCE;
if(!s)
buffer_size = s->buf_end - s->buffer;
pos = s->pos - (s->write_flag ? 0 : buffer_size);
if (whence != SEEK_CUR && whence != SEEK_SET)
if (whence == SEEK_CUR) {
offset1 = pos + (s->buf_ptr - s->buffer);
if (offset == 0)
return offset1;
offset += offset1;
}
if (offset < 0)
if (s->short_seek_get) {
short_seek = s->short_seek_get(s->opaque);
if (short_seek <= 0)
short_seek = s->short_seek_threshold;
} else
short_seek = s->short_seek_threshold;
offset1 = offset - pos;
s->buf_ptr_max = FFMAX(s->buf_ptr_max, s->buf_ptr);
if ((!s->direct || !s->seek) &&
offset1 >= 0 && offset1 <= (s->write_flag ? s->buf_ptr_max - s->buffer : buffer_size)) {
s->buf_ptr = s->buffer + offset1;
} else if ((!(s->seekable & AVIO_SEEKABLE_NORMAL) ||
offset1 <= buffer_size + short_seek) &&
!s->write_flag && offset1 >= 0 &&
(!s->direct || !s->seek) &&
(whence != SEEK_END || force)) {
while(s->pos < offset && !s->eof_reached)
fill_buffer(s);
if (s->eof_reached)
return AVERROR_EOF;
s->buf_ptr = s->buf_end - (s->pos - offset);
} else if(!s->write_flag && offset1 < 0 && -offset1 < buffer_size>>1 && s->seek && offset > 0) {
int64_t res;
pos -= FFMIN(buffer_size>>1, pos);
if ((res = s->seek(s->opaque, pos, SEEK_SET)) < 0)
return res;
s->buf_end =
s->buf_ptr = s->buffer;
s->pos = pos;
s->eof_reached = 0;
fill_buffer(s);
return avio_seek(s, offset, SEEK_SET | force);
} else {
int64_t res;
if (s->write_flag) {
flush_buffer(s);
}
if (!s->seek)
return AVERROR(EPIPE);
if ((res = s->seek(s->opaque, offset, SEEK_SET)) < 0)
return res;
s->seek_count ++;
if (!s->write_flag)
s->buf_end = s->buffer;
s->buf_ptr = s->buf_ptr_max = s->buffer;
s->pos = offset;
}
s->eof_reached = 0;
return offset;
} | 1threat |
static void dv_decode_ac(GetBitContext *gb, BlockInfo *mb, DCTELEM *block)
{
int last_index = gb->size_in_bits;
const uint8_t *scan_table = mb->scan_table;
const uint32_t *factor_table = mb->factor_table;
int pos = mb->pos;
int partial_bit_count = mb->partial_bit_count;
int level, run, vlc_len, index;
OPEN_READER(re, gb);
UPDATE_CACHE(re, gb);
if (partial_bit_count > 0) {
re_cache = ((unsigned)re_cache >> partial_bit_count) |
(mb->partial_bit_buffer << (sizeof(re_cache) * 8 - partial_bit_count));
re_index -= partial_bit_count;
mb->partial_bit_count = 0;
}
for (;;) {
av_dlog(NULL, "%2d: bits=%04x index=%d\n", pos, SHOW_UBITS(re, gb, 16),
re_index);
index = NEG_USR32(re_cache, TEX_VLC_BITS);
vlc_len = dv_rl_vlc[index].len;
if (vlc_len < 0) {
index = NEG_USR32((unsigned)re_cache << TEX_VLC_BITS, -vlc_len) + dv_rl_vlc[index].level;
vlc_len = TEX_VLC_BITS - vlc_len;
}
level = dv_rl_vlc[index].level;
run = dv_rl_vlc[index].run;
if (re_index + vlc_len > last_index) {
mb->partial_bit_count = last_index - re_index;
mb->partial_bit_buffer = NEG_USR32(re_cache, mb->partial_bit_count);
re_index = last_index;
break;
}
re_index += vlc_len;
av_dlog(NULL, "run=%d level=%d\n", run, level);
pos += run;
if (pos >= 64)
break;
level = (level * factor_table[pos] + (1 << (dv_iweight_bits - 1))) >> dv_iweight_bits;
block[scan_table[pos]] = level;
UPDATE_CACHE(re, gb);
}
CLOSE_READER(re, gb);
mb->pos = pos;
}
| 1threat |
def equilibrium_index(arr):
    """Return the first index whose left-hand sum equals its right-hand sum, or -1."""
    total_sum = sum(arr)
    left_sum = 0
    for i, num in enumerate(arr):
        total_sum -= num  # total_sum is now the sum of elements to the right of i
        if left_sum == total_sum:
            return i
        left_sum += num
    return -1
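A standalone usage check of the equilibrium-index logic above (the function is repeated here so the snippet runs on its own):

```python
def equilibrium_index(arr):
    """First index whose left-hand sum equals its right-hand sum, else -1."""
    total_sum = sum(arr)       # becomes the right-hand sum as we walk the list
    left_sum = 0
    for i, num in enumerate(arr):
        total_sum -= num       # exclude the current element from the right-hand sum
        if left_sum == total_sum:
            return i
        left_sum += num
    return -1

print(equilibrium_index([-7, 1, 5, 2, -4, 3, 0]))  # 3  (since -7+1+5 == -4+3+0)
print(equilibrium_index([1, 2, 3]))                # -1 (no equilibrium point)
```

The single pass keeps the running left sum and derives the right sum by subtraction, giving O(n) time and O(1) extra space.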
How do I make a new line in swift : <p>Is there a way to make a new line in Swift, like <code>"\n"</code> in Java?</p>
<pre><code>var example: String = "Hello World \n This is a new line"
</code></pre>
| 0debug |
av_cold int MPV_encode_init(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
int i;
int chroma_h_shift, chroma_v_shift;
MPV_encode_defaults(s);
switch (avctx->codec_id) {
case CODEC_ID_MPEG2VIDEO:
if(avctx->pix_fmt != PIX_FMT_YUV420P && avctx->pix_fmt != PIX_FMT_YUV422P){
av_log(avctx, AV_LOG_ERROR, "only YUV420 and YUV422 are supported\n");
return -1;
}
break;
case CODEC_ID_LJPEG:
if(avctx->pix_fmt != PIX_FMT_YUVJ420P && avctx->pix_fmt != PIX_FMT_YUVJ422P && avctx->pix_fmt != PIX_FMT_YUVJ444P && avctx->pix_fmt != PIX_FMT_RGB32 &&
((avctx->pix_fmt != PIX_FMT_YUV420P && avctx->pix_fmt != PIX_FMT_YUV422P && avctx->pix_fmt != PIX_FMT_YUV444P) || avctx->strict_std_compliance>FF_COMPLIANCE_UNOFFICIAL)){
av_log(avctx, AV_LOG_ERROR, "colorspace not supported in LJPEG\n");
return -1;
}
break;
case CODEC_ID_MJPEG:
if(avctx->pix_fmt != PIX_FMT_YUVJ420P && avctx->pix_fmt != PIX_FMT_YUVJ422P &&
((avctx->pix_fmt != PIX_FMT_YUV420P && avctx->pix_fmt != PIX_FMT_YUV422P) || avctx->strict_std_compliance>FF_COMPLIANCE_UNOFFICIAL)){
av_log(avctx, AV_LOG_ERROR, "colorspace not supported in jpeg\n");
return -1;
}
break;
default:
if(avctx->pix_fmt != PIX_FMT_YUV420P){
av_log(avctx, AV_LOG_ERROR, "only YUV420 is supported\n");
return -1;
}
}
switch (avctx->pix_fmt) {
case PIX_FMT_YUVJ422P:
case PIX_FMT_YUV422P:
s->chroma_format = CHROMA_422;
break;
case PIX_FMT_YUVJ420P:
case PIX_FMT_YUV420P:
default:
s->chroma_format = CHROMA_420;
break;
}
s->bit_rate = avctx->bit_rate;
s->width = avctx->width;
s->height = avctx->height;
if(avctx->gop_size > 600 && avctx->strict_std_compliance>FF_COMPLIANCE_EXPERIMENTAL){
av_log(avctx, AV_LOG_ERROR, "Warning keyframe interval too large! reducing it ...\n");
avctx->gop_size=600;
}
s->gop_size = avctx->gop_size;
s->avctx = avctx;
s->flags= avctx->flags;
s->flags2= avctx->flags2;
s->max_b_frames= avctx->max_b_frames;
s->codec_id= avctx->codec->id;
s->luma_elim_threshold = avctx->luma_elim_threshold;
s->chroma_elim_threshold= avctx->chroma_elim_threshold;
s->strict_std_compliance= avctx->strict_std_compliance;
s->data_partitioning= avctx->flags & CODEC_FLAG_PART;
s->quarter_sample= (avctx->flags & CODEC_FLAG_QPEL)!=0;
s->mpeg_quant= avctx->mpeg_quant;
s->rtp_mode= !!avctx->rtp_payload_size;
s->intra_dc_precision= avctx->intra_dc_precision;
s->user_specified_pts = AV_NOPTS_VALUE;
if (s->gop_size <= 1) {
s->intra_only = 1;
s->gop_size = 12;
} else {
s->intra_only = 0;
}
s->me_method = avctx->me_method;
s->fixed_qscale = !!(avctx->flags & CODEC_FLAG_QSCALE);
s->adaptive_quant= ( s->avctx->lumi_masking
|| s->avctx->dark_masking
|| s->avctx->temporal_cplx_masking
|| s->avctx->spatial_cplx_masking
|| s->avctx->p_masking
|| s->avctx->border_masking
|| (s->flags&CODEC_FLAG_QP_RD))
&& !s->fixed_qscale;
s->obmc= !!(s->flags & CODEC_FLAG_OBMC);
s->loop_filter= !!(s->flags & CODEC_FLAG_LOOP_FILTER);
s->alternate_scan= !!(s->flags & CODEC_FLAG_ALT_SCAN);
s->intra_vlc_format= !!(s->flags2 & CODEC_FLAG2_INTRA_VLC);
s->q_scale_type= !!(s->flags2 & CODEC_FLAG2_NON_LINEAR_QUANT);
if(avctx->rc_max_rate && !avctx->rc_buffer_size){
av_log(avctx, AV_LOG_ERROR, "a vbv buffer size is needed, for encoding with a maximum bitrate\n");
return -1;
}
if(avctx->rc_min_rate && avctx->rc_max_rate != avctx->rc_min_rate){
av_log(avctx, AV_LOG_INFO, "Warning min_rate > 0 but min_rate != max_rate isn't recommended!\n");
}
if(avctx->rc_min_rate && avctx->rc_min_rate > avctx->bit_rate){
av_log(avctx, AV_LOG_ERROR, "bitrate below min bitrate\n");
return -1;
}
if(avctx->rc_max_rate && avctx->rc_max_rate < avctx->bit_rate){
av_log(avctx, AV_LOG_INFO, "bitrate above max bitrate\n");
return -1;
}
if(avctx->rc_max_rate && avctx->rc_max_rate == avctx->bit_rate && avctx->rc_max_rate != avctx->rc_min_rate){
av_log(avctx, AV_LOG_INFO, "impossible bitrate constraints, this will fail\n");
}
if(avctx->rc_buffer_size && avctx->bit_rate*(int64_t)avctx->time_base.num > avctx->rc_buffer_size * (int64_t)avctx->time_base.den){
av_log(avctx, AV_LOG_ERROR, "VBV buffer too small for bitrate\n");
return -1;
}
if(!s->fixed_qscale && avctx->bit_rate*av_q2d(avctx->time_base) > avctx->bit_rate_tolerance){
av_log(avctx, AV_LOG_ERROR, "bitrate tolerance too small for bitrate\n");
return -1;
}
if( s->avctx->rc_max_rate && s->avctx->rc_min_rate == s->avctx->rc_max_rate
&& (s->codec_id == CODEC_ID_MPEG1VIDEO || s->codec_id == CODEC_ID_MPEG2VIDEO)
&& 90000LL * (avctx->rc_buffer_size-1) > s->avctx->rc_max_rate*0xFFFFLL){
av_log(avctx, AV_LOG_INFO, "Warning vbv_delay will be set to 0xFFFF (=VBR) as the specified vbv buffer is too large for the given bitrate!\n");
}
if((s->flags & CODEC_FLAG_4MV) && s->codec_id != CODEC_ID_MPEG4
&& s->codec_id != CODEC_ID_H263 && s->codec_id != CODEC_ID_H263P && s->codec_id != CODEC_ID_FLV1){
av_log(avctx, AV_LOG_ERROR, "4MV not supported by codec\n");
return -1;
}
if(s->obmc && s->avctx->mb_decision != FF_MB_DECISION_SIMPLE){
av_log(avctx, AV_LOG_ERROR, "OBMC is only supported with simple mb decision\n");
return -1;
}
if(s->obmc && s->codec_id != CODEC_ID_H263 && s->codec_id != CODEC_ID_H263P){
av_log(avctx, AV_LOG_ERROR, "OBMC is only supported with H263(+)\n");
return -1;
}
if(s->quarter_sample && s->codec_id != CODEC_ID_MPEG4){
av_log(avctx, AV_LOG_ERROR, "qpel not supported by codec\n");
return -1;
}
if(s->data_partitioning && s->codec_id != CODEC_ID_MPEG4){
av_log(avctx, AV_LOG_ERROR, "data partitioning not supported by codec\n");
return -1;
}
if(s->max_b_frames && s->codec_id != CODEC_ID_MPEG4 && s->codec_id != CODEC_ID_MPEG1VIDEO && s->codec_id != CODEC_ID_MPEG2VIDEO){
av_log(avctx, AV_LOG_ERROR, "b frames not supported by codec\n");
return -1;
}
if ((s->codec_id == CODEC_ID_MPEG4 || s->codec_id == CODEC_ID_H263 ||
s->codec_id == CODEC_ID_H263P) &&
(avctx->sample_aspect_ratio.num > 255 || avctx->sample_aspect_ratio.den > 255)) {
av_log(avctx, AV_LOG_ERROR, "Invalid pixel aspect ratio %i/%i, limit is 255/255\n",
avctx->sample_aspect_ratio.num, avctx->sample_aspect_ratio.den);
return -1;
}
if((s->flags & (CODEC_FLAG_INTERLACED_DCT|CODEC_FLAG_INTERLACED_ME|CODEC_FLAG_ALT_SCAN))
&& s->codec_id != CODEC_ID_MPEG4 && s->codec_id != CODEC_ID_MPEG2VIDEO){
av_log(avctx, AV_LOG_ERROR, "interlacing not supported by codec\n");
return -1;
}
if(s->mpeg_quant && s->codec_id != CODEC_ID_MPEG4){
av_log(avctx, AV_LOG_ERROR, "mpeg2 style quantization not supported by codec\n");
return -1;
}
if((s->flags & CODEC_FLAG_CBP_RD) && !avctx->trellis){
av_log(avctx, AV_LOG_ERROR, "CBP RD needs trellis quant\n");
return -1;
}
if((s->flags & CODEC_FLAG_QP_RD) && s->avctx->mb_decision != FF_MB_DECISION_RD){
av_log(avctx, AV_LOG_ERROR, "QP RD needs mbd=2\n");
return -1;
}
if(s->avctx->scenechange_threshold < 1000000000 && (s->flags & CODEC_FLAG_CLOSED_GOP)){
av_log(avctx, AV_LOG_ERROR, "closed gop with scene change detection are not supported yet, set threshold to 1000000000\n");
return -1;
}
if((s->flags2 & CODEC_FLAG2_INTRA_VLC) && s->codec_id != CODEC_ID_MPEG2VIDEO){
av_log(avctx, AV_LOG_ERROR, "intra vlc table not supported by codec\n");
return -1;
}
if(s->flags & CODEC_FLAG_LOW_DELAY){
if (s->codec_id != CODEC_ID_MPEG2VIDEO){
av_log(avctx, AV_LOG_ERROR, "low delay forcing is only available for mpeg2\n");
return -1;
}
if (s->max_b_frames != 0){
av_log(avctx, AV_LOG_ERROR, "b frames cannot be used with low delay\n");
return -1;
}
}
if(s->q_scale_type == 1){
if(s->codec_id != CODEC_ID_MPEG2VIDEO){
av_log(avctx, AV_LOG_ERROR, "non linear quant is only available for mpeg2\n");
return -1;
}
if(avctx->qmax > 12){
av_log(avctx, AV_LOG_ERROR, "non linear quant only supports qmax <= 12 currently\n");
return -1;
}
}
if(s->avctx->thread_count > 1 && s->codec_id != CODEC_ID_MPEG4
&& s->codec_id != CODEC_ID_MPEG1VIDEO && s->codec_id != CODEC_ID_MPEG2VIDEO
&& (s->codec_id != CODEC_ID_H263P || !(s->flags & CODEC_FLAG_H263P_SLICE_STRUCT))){
av_log(avctx, AV_LOG_ERROR, "multi threaded encoding not supported by codec\n");
return -1;
}
if(s->avctx->thread_count < 1){
av_log(avctx, AV_LOG_ERROR, "automatic thread number detection not supported by codec, patch welcome\n");
return -1;
}
if(s->avctx->thread_count > 1)
s->rtp_mode= 1;
if(!avctx->time_base.den || !avctx->time_base.num){
av_log(avctx, AV_LOG_ERROR, "framerate not set\n");
return -1;
}
i= (INT_MAX/2+128)>>8;
if(avctx->me_threshold >= i){
av_log(avctx, AV_LOG_ERROR, "me_threshold too large, max is %d\n", i - 1);
return -1;
}
if(avctx->mb_threshold >= i){
av_log(avctx, AV_LOG_ERROR, "mb_threshold too large, max is %d\n", i - 1);
return -1;
}
if(avctx->b_frame_strategy && (avctx->flags&CODEC_FLAG_PASS2)){
av_log(avctx, AV_LOG_INFO, "notice: b_frame_strategy only affects the first pass\n");
avctx->b_frame_strategy = 0;
}
i= av_gcd(avctx->time_base.den, avctx->time_base.num);
if(i > 1){
av_log(avctx, AV_LOG_INFO, "removing common factors from framerate\n");
avctx->time_base.den /= i;
avctx->time_base.num /= i;
}
if(s->mpeg_quant || s->codec_id==CODEC_ID_MPEG1VIDEO || s->codec_id==CODEC_ID_MPEG2VIDEO || s->codec_id==CODEC_ID_MJPEG){
s->intra_quant_bias= 3<<(QUANT_BIAS_SHIFT-3);
s->inter_quant_bias= 0;
}else{
s->intra_quant_bias=0;
s->inter_quant_bias=-(1<<(QUANT_BIAS_SHIFT-2));
}
if(avctx->intra_quant_bias != FF_DEFAULT_QUANT_BIAS)
s->intra_quant_bias= avctx->intra_quant_bias;
if(avctx->inter_quant_bias != FF_DEFAULT_QUANT_BIAS)
s->inter_quant_bias= avctx->inter_quant_bias;
avcodec_get_chroma_sub_sample(avctx->pix_fmt, &chroma_h_shift, &chroma_v_shift);
if(avctx->codec_id == CODEC_ID_MPEG4 && s->avctx->time_base.den > (1<<16)-1){
av_log(avctx, AV_LOG_ERROR, "timebase not supported by mpeg 4 standard\n");
return -1;
}
s->time_increment_bits = av_log2(s->avctx->time_base.den - 1) + 1;
switch(avctx->codec->id) {
case CODEC_ID_MPEG1VIDEO:
s->out_format = FMT_MPEG1;
s->low_delay= !!(s->flags & CODEC_FLAG_LOW_DELAY);
avctx->delay= s->low_delay ? 0 : (s->max_b_frames + 1);
break;
case CODEC_ID_MPEG2VIDEO:
s->out_format = FMT_MPEG1;
s->low_delay= !!(s->flags & CODEC_FLAG_LOW_DELAY);
avctx->delay= s->low_delay ? 0 : (s->max_b_frames + 1);
s->rtp_mode= 1;
break;
case CODEC_ID_LJPEG:
case CODEC_ID_MJPEG:
s->out_format = FMT_MJPEG;
s->intra_only = 1;
if(avctx->codec->id == CODEC_ID_LJPEG && avctx->pix_fmt == PIX_FMT_BGRA){
s->mjpeg_vsample[0] = s->mjpeg_hsample[0] =
s->mjpeg_vsample[1] = s->mjpeg_hsample[1] =
s->mjpeg_vsample[2] = s->mjpeg_hsample[2] = 1;
}else{
s->mjpeg_vsample[0] = 2;
s->mjpeg_vsample[1] = 2>>chroma_v_shift;
s->mjpeg_vsample[2] = 2>>chroma_v_shift;
s->mjpeg_hsample[0] = 2;
s->mjpeg_hsample[1] = 2>>chroma_h_shift;
s->mjpeg_hsample[2] = 2>>chroma_h_shift;
}
if (!(CONFIG_MJPEG_ENCODER || CONFIG_LJPEG_ENCODER)
|| ff_mjpeg_encode_init(s) < 0)
return -1;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_H261:
if (!CONFIG_H261_ENCODER) return -1;
if (ff_h261_get_picture_format(s->width, s->height) < 0) {
av_log(avctx, AV_LOG_ERROR, "The specified picture size of %dx%d is not valid for the H.261 codec.\nValid sizes are 176x144, 352x288\n", s->width, s->height);
return -1;
}
s->out_format = FMT_H261;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_H263:
if (!CONFIG_H263_ENCODER) return -1;
if (ff_match_2uint16(h263_format, FF_ARRAY_ELEMS(h263_format), s->width, s->height) == 8) {
av_log(avctx, AV_LOG_INFO, "The specified picture size of %dx%d is not valid for the H.263 codec.\nValid sizes are 128x96, 176x144, 352x288, 704x576, and 1408x1152. Try H.263+.\n", s->width, s->height);
return -1;
}
s->out_format = FMT_H263;
s->obmc= (avctx->flags & CODEC_FLAG_OBMC) ? 1:0;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_H263P:
s->out_format = FMT_H263;
s->h263_plus = 1;
s->umvplus = (avctx->flags & CODEC_FLAG_H263P_UMV) ? 1:0;
s->h263_aic= (avctx->flags & CODEC_FLAG_AC_PRED) ? 1:0;
s->modified_quant= s->h263_aic;
s->alt_inter_vlc= (avctx->flags & CODEC_FLAG_H263P_AIV) ? 1:0;
s->obmc= (avctx->flags & CODEC_FLAG_OBMC) ? 1:0;
s->loop_filter= (avctx->flags & CODEC_FLAG_LOOP_FILTER) ? 1:0;
s->unrestricted_mv= s->obmc || s->loop_filter || s->umvplus;
s->h263_slice_structured= (s->flags & CODEC_FLAG_H263P_SLICE_STRUCT) ? 1:0;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_FLV1:
s->out_format = FMT_H263;
s->h263_flv = 2;
s->unrestricted_mv = 1;
s->rtp_mode=0;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_RV10:
s->out_format = FMT_H263;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_RV20:
s->out_format = FMT_H263;
avctx->delay=0;
s->low_delay=1;
s->modified_quant=1;
s->h263_aic=1;
s->h263_plus=1;
s->loop_filter=1;
s->unrestricted_mv= 0;
break;
case CODEC_ID_MPEG4:
s->out_format = FMT_H263;
s->h263_pred = 1;
s->unrestricted_mv = 1;
s->low_delay= s->max_b_frames ? 0 : 1;
avctx->delay= s->low_delay ? 0 : (s->max_b_frames + 1);
break;
case CODEC_ID_MSMPEG4V1:
s->out_format = FMT_H263;
s->h263_msmpeg4 = 1;
s->h263_pred = 1;
s->unrestricted_mv = 1;
s->msmpeg4_version= 1;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_MSMPEG4V2:
s->out_format = FMT_H263;
s->h263_msmpeg4 = 1;
s->h263_pred = 1;
s->unrestricted_mv = 1;
s->msmpeg4_version= 2;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_MSMPEG4V3:
s->out_format = FMT_H263;
s->h263_msmpeg4 = 1;
s->h263_pred = 1;
s->unrestricted_mv = 1;
s->msmpeg4_version= 3;
s->flipflop_rounding=1;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_WMV1:
s->out_format = FMT_H263;
s->h263_msmpeg4 = 1;
s->h263_pred = 1;
s->unrestricted_mv = 1;
s->msmpeg4_version= 4;
s->flipflop_rounding=1;
avctx->delay=0;
s->low_delay=1;
break;
case CODEC_ID_WMV2:
s->out_format = FMT_H263;
s->h263_msmpeg4 = 1;
s->h263_pred = 1;
s->unrestricted_mv = 1;
s->msmpeg4_version= 5;
s->flipflop_rounding=1;
avctx->delay=0;
s->low_delay=1;
break;
default:
return -1;
}
avctx->has_b_frames= !s->low_delay;
s->encoding = 1;
s->progressive_frame=
s->progressive_sequence= !(avctx->flags & (CODEC_FLAG_INTERLACED_DCT|CODEC_FLAG_INTERLACED_ME|CODEC_FLAG_ALT_SCAN));
if (MPV_common_init(s) < 0)
return -1;
if(!s->dct_quantize)
s->dct_quantize = dct_quantize_c;
if(!s->denoise_dct)
s->denoise_dct = denoise_dct_c;
s->fast_dct_quantize = s->dct_quantize;
if(avctx->trellis)
s->dct_quantize = dct_quantize_trellis_c;
if((CONFIG_H263P_ENCODER || CONFIG_RV20_ENCODER) && s->modified_quant)
s->chroma_qscale_table= ff_h263_chroma_qscale_table;
s->quant_precision=5;
ff_set_cmp(&s->dsp, s->dsp.ildct_cmp, s->avctx->ildct_cmp);
ff_set_cmp(&s->dsp, s->dsp.frame_skip_cmp, s->avctx->frame_skip_cmp);
if (CONFIG_H261_ENCODER && s->out_format == FMT_H261)
ff_h261_encode_init(s);
if (CONFIG_H263_ENCODER && s->out_format == FMT_H263)
h263_encode_init(s);
if (CONFIG_MSMPEG4_ENCODER && s->msmpeg4_version)
ff_msmpeg4_encode_init(s);
if ((CONFIG_MPEG1VIDEO_ENCODER || CONFIG_MPEG2VIDEO_ENCODER)
&& s->out_format == FMT_MPEG1)
ff_mpeg1_encode_init(s);
for(i=0;i<64;i++) {
int j= s->dsp.idct_permutation[i];
if(CONFIG_MPEG4_ENCODER && s->codec_id==CODEC_ID_MPEG4 && s->mpeg_quant){
s->intra_matrix[j] = ff_mpeg4_default_intra_matrix[i];
s->inter_matrix[j] = ff_mpeg4_default_non_intra_matrix[i];
}else if(s->out_format == FMT_H263 || s->out_format == FMT_H261){
s->intra_matrix[j] =
s->inter_matrix[j] = ff_mpeg1_default_non_intra_matrix[i];
}else
{
s->intra_matrix[j] = ff_mpeg1_default_intra_matrix[i];
s->inter_matrix[j] = ff_mpeg1_default_non_intra_matrix[i];
}
if(s->avctx->intra_matrix)
s->intra_matrix[j] = s->avctx->intra_matrix[i];
if(s->avctx->inter_matrix)
s->inter_matrix[j] = s->avctx->inter_matrix[i];
}
if (s->out_format != FMT_MJPEG) {
ff_convert_matrix(&s->dsp, s->q_intra_matrix, s->q_intra_matrix16,
s->intra_matrix, s->intra_quant_bias, avctx->qmin, 31, 1);
ff_convert_matrix(&s->dsp, s->q_inter_matrix, s->q_inter_matrix16,
s->inter_matrix, s->inter_quant_bias, avctx->qmin, 31, 0);
}
if(ff_rate_control_init(s) < 0)
return -1;
return 0;
}
| 1threat |
Using latest JavaScript features in TypeScript, such as ES2018 : <p>I have tried searching through TypeScript's documentation on its configuration and can't seem to find the answer to what should be a simple question.</p>
<p>Simply, how does one configure the TypeScript compiler so that it knows what JavaScript feature sets we are using?</p>
<p>So for example, ES2019 lands and I think 'Ohh, want to get me some of that'. In that situation, what do I need to upgrade to allow the compiler to transpile and polyfill what it needs to?</p>
<p>The lib option in the tsconfig confuses me, and the docs don't explain much about the available libraries. I can't find anything on them directly either.</p>
<p>So let's say ES2019 comes out and I add the lib option for it (assuming there will be one). Does that mean I can now use ES2019 features? If I wish to support everything from ES2019 down, do I need to add the libs for every version below it, or does adding the ES2019 lib provide all I need?</p>
<p>Where do those libraries come from? Are they part of the core TypeScript library, so that to get more I have to upgrade, or can I simply upgrade a separate package and it will all work?</p>
<p>Finally, do those libs provide everything needed to fully support that version of the spec, or is it a subset of features?</p>
<p>In our project we currently use TypeScript version 2.5.3.</p>
<p>I realize that's a whole lot of questions, so any information on anything, or links to documentation, would be greatly appreciated.</p>
| 0debug |
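For reference (a sketch; exact option values vary by TypeScript release): `target` controls which syntax level the compiler emits, while `lib` declares which runtime APIs the type checker may assume exist. The ES-suffixed libs are cumulative through internal references, so `"lib": ["es2018"]` also pulls in es2017 and below. The lib files ship inside the `typescript` npm package itself, so new feature sets generally arrive by upgrading that package (a compiler as old as 2.5.3 predates the newer libs). Crucially, TypeScript only downlevels syntax; it never injects polyfills for runtime library features, so those need a separate polyfill such as core-js.

```json
{
  "compilerOptions": {
    "target": "es2018",          // syntax level of the emitted JavaScript
    "lib": ["es2018", "dom"]     // ambient APIs the checker assumes exist
  }
}
```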
int ff_mpeg4_decode_video_packet_header(MpegEncContext *s)
{
int mb_num_bits= av_log2(s->mb_num - 1) + 1;
int header_extension=0, mb_num, len;
if( get_bits_count(&s->gb) > s->gb.size_in_bits-20) return -1;
for(len=0; len<32; len++){
if(get_bits1(&s->gb)) break;
}
if(len!=ff_mpeg4_get_video_packet_prefix_length(s)){
av_log(s->avctx, AV_LOG_ERROR, "marker does not match f_code\n");
return -1;
}
if(s->shape != RECT_SHAPE){
header_extension= get_bits1(&s->gb);
}
mb_num= get_bits(&s->gb, mb_num_bits);
if(mb_num>=s->mb_num){
av_log(s->avctx, AV_LOG_ERROR, "illegal mb_num in video packet (%d %d) \n", mb_num, s->mb_num);
return -1;
}
if(s->pict_type == AV_PICTURE_TYPE_B){
int mb_x = 0, mb_y = 0;
while (s->next_picture.mbskip_table[s->mb_index2xy[mb_num]]) {
if (!mb_x)
ff_thread_await_progress(&s->next_picture_ptr->tf, mb_y++, 0);
mb_num++;
if (++mb_x == s->mb_width) mb_x = 0;
}
if(mb_num >= s->mb_num) return -1;
}
s->mb_x= mb_num % s->mb_width;
s->mb_y= mb_num / s->mb_width;
if(s->shape != BIN_ONLY_SHAPE){
int qscale= get_bits(&s->gb, s->quant_precision);
if(qscale)
s->chroma_qscale=s->qscale= qscale;
}
if(s->shape == RECT_SHAPE){
header_extension= get_bits1(&s->gb);
}
if(header_extension){
int time_incr=0;
while (get_bits1(&s->gb) != 0)
time_incr++;
check_marker(&s->gb, "before time_increment in video packed header");
skip_bits(&s->gb, s->time_increment_bits);
check_marker(&s->gb, "before vop_coding_type in video packed header");
skip_bits(&s->gb, 2);
if(s->shape != BIN_ONLY_SHAPE){
skip_bits(&s->gb, 3);
if(s->pict_type == AV_PICTURE_TYPE_S && s->vol_sprite_usage==GMC_SPRITE){
mpeg4_decode_sprite_trajectory(s, &s->gb);
av_log(s->avctx, AV_LOG_ERROR, "untested\n");
}
if (s->pict_type != AV_PICTURE_TYPE_I) {
int f_code = get_bits(&s->gb, 3);
if(f_code==0){
av_log(s->avctx, AV_LOG_ERROR, "Error, video packet header damaged (f_code=0)\n");
}
}
if (s->pict_type == AV_PICTURE_TYPE_B) {
int b_code = get_bits(&s->gb, 3);
if(b_code==0){
av_log(s->avctx, AV_LOG_ERROR, "Error, video packet header damaged (b_code=0)\n");
}
}
}
}
return 0;
}
| 1threat |
How to cleanly reboot CoreOS when "Failed to talk to init daemon" is seen? : <p>How do I cleanly reboot my CoreOS machine after the following issue shows up?</p>
<pre><code>core@node2 ~ $ sudo reboot
Failed to talk to init daemon
core@node2 ~ $ sudo shutdown -r now
Failed to talk to init daemon.
core@node2 ~ $ sudo systemctl reboot
Failed to get D-Bus connection: Operation not permitted
core@contiv-node2 ~ $ shutdown
Must be root.
core@node2 ~ $ sudo shutdown
Unable to perform operation without bus connection.
core@node2 ~ $ cat /etc/lsb-release
DISTRIB_ID=CoreOS
DISTRIB_RELEASE=991.2.0
DISTRIB_CODENAME="Coeur Rouge"
DISTRIB_DESCRIPTION="CoreOS 991.2.0 (Coeur Rouge)"
</code></pre>
| 0debug |
static void virtio_s390_notify(DeviceState *d, uint16_t vector)
{
VirtIOS390Device *dev = to_virtio_s390_device_fast(d);
uint64_t token = s390_virtio_device_vq_token(dev, vector);
S390CPU *cpu = s390_cpu_addr2state(0);
s390_virtio_irq(cpu, 0, token);
}
| 1threat |
Filter values from a list based on priority : <p>I have a list of valid values for a type:</p>
<pre><code>Set<String> validTypes = ImmutableSet.of("TypeA", "TypeB", "TypeC");
</code></pre>
<p>From a given list I want to extract the first value which has a valid type. In this scenario I would write something of this sort:</p>
<pre><code>public class A{
private String type;
private String member;
}
List<A> classAList;
classAList.stream()
.filter(a -> validTypes.contains(a.getType()))
.findFirst();
</code></pre>
<p>However I would like to give preference to <code>TypeA</code>, i.e. if <code>classAList</code> has both <code>TypeA</code> and <code>TypeB</code>, I want the object whose type is <code>TypeA</code>. To do this, one approach I've used is:</p>
<pre><code>Set<String> preferredValidTypes = ImmutableSet.of("TypeA");
A result = classAList.stream()
    .filter(a -> preferredValidTypes.contains(a.getType()))
    .findFirst()
    .orElseGet(() -> classAList.stream()
        .filter(a -> validTypes.contains(a.getType()))
        .findFirst()
        .orElse(null));
</code></pre>
<p>Is there a better approach? </p>
| 0debug |
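A possible alternative to the nested two-pass lookup (a sketch, not the only way: `Optional.or` requires Java 9+, and plain Strings stand in for the question's class `A` to keep this self-contained):

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class PriorityFilter {
    // Try the preferred types first, then fall back to any valid type.
    static Optional<String> pickType(List<String> types,
                                     Set<String> valid,
                                     Set<String> preferred) {
        return types.stream()
                .filter(t -> valid.contains(t) && preferred.contains(t))
                .findFirst()
                .or(() -> types.stream().filter(valid::contains).findFirst());
    }

    public static void main(String[] args) {
        Set<String> valid = Set.of("TypeA", "TypeB", "TypeC");
        Set<String> preferred = Set.of("TypeA");
        System.out.println(pickType(List.of("TypeX", "TypeB", "TypeA"),
                                    valid, preferred)); // Optional[TypeA]
    }
}
```

This keeps the two tiers explicit without returning a nested `Optional`, which is where the original `orElseGet` attempt stops compiling.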
static pid_t qtest_qemu_pid(QTestState *s)
{
FILE *f;
char buffer[1024];
pid_t pid = -1;
f = fopen(s->pid_file, "r");
if (f) {
if (fgets(buffer, sizeof(buffer), f)) {
pid = atoi(buffer);
}
fclose(f);
}
return pid;
}
| 1threat |
display string as multiple tags in a single textview : First of all, I know how to split a string, and my question is not about splitting on commas or spaces.
I have a string like this,
"hello,nice,owesome"
and I want to display it like this:
[![enter image description here][1]][1]
This is how I separate my string:
List<String> list = Arrays.asList(str.split(","));
now I have the separated greeting list, but I don't know how to display that list as multiple tags in a single TextView.
[1]: https://i.stack.imgur.com/mlQ8Q.png | 0debug |
static FFPsyWindowInfo psy_lame_window(FFPsyContext *ctx,
const int16_t *audio, const int16_t *la,
int channel, int prev_type)
{
AacPsyContext *pctx = (AacPsyContext*) ctx->model_priv_data;
AacPsyChannel *pch = &pctx->ch[channel];
int grouping = 0;
int uselongblock = 1;
int attacks[AAC_NUM_BLOCKS_SHORT + 1] = { 0 };
int i;
FFPsyWindowInfo wi;
memset(&wi, 0, sizeof(wi));
if (la) {
float hpfsmpl[AAC_BLOCK_SIZE_LONG];
float const *pf = hpfsmpl;
float attack_intensity[(AAC_NUM_BLOCKS_SHORT + 1) * PSY_LAME_NUM_SUBBLOCKS];
float energy_subshort[(AAC_NUM_BLOCKS_SHORT + 1) * PSY_LAME_NUM_SUBBLOCKS];
float energy_short[AAC_NUM_BLOCKS_SHORT + 1] = { 0 };
int chans = ctx->avctx->channels;
const int16_t *firbuf = la + (AAC_BLOCK_SIZE_SHORT/4 - PSY_LAME_FIR_LEN) * chans;
int j, att_sum = 0;
for (i = 0; i < AAC_BLOCK_SIZE_LONG; i++) {
float sum1, sum2;
sum1 = firbuf[(i + ((PSY_LAME_FIR_LEN - 1) / 2)) * chans];
sum2 = 0.0;
for (j = 0; j < ((PSY_LAME_FIR_LEN - 1) / 2) - 1; j += 2) {
sum1 += psy_fir_coeffs[j] * (firbuf[(i + j) * chans] + firbuf[(i + PSY_LAME_FIR_LEN - j) * chans]);
sum2 += psy_fir_coeffs[j + 1] * (firbuf[(i + j + 1) * chans] + firbuf[(i + PSY_LAME_FIR_LEN - j - 1) * chans]);
}
hpfsmpl[i] = sum1 + sum2;
}
for (i = 0; i < PSY_LAME_NUM_SUBBLOCKS; i++) {
energy_subshort[i] = pch->prev_energy_subshort[i + ((AAC_NUM_BLOCKS_SHORT - 1) * PSY_LAME_NUM_SUBBLOCKS)];
assert(pch->prev_energy_subshort[i + ((AAC_NUM_BLOCKS_SHORT - 2) * PSY_LAME_NUM_SUBBLOCKS + 1)] > 0);
attack_intensity[i] = energy_subshort[i] / pch->prev_energy_subshort[i + ((AAC_NUM_BLOCKS_SHORT - 2) * PSY_LAME_NUM_SUBBLOCKS + 1)];
energy_short[0] += energy_subshort[i];
}
for (i = 0; i < AAC_NUM_BLOCKS_SHORT * PSY_LAME_NUM_SUBBLOCKS; i++) {
float const *const pfe = pf + AAC_BLOCK_SIZE_LONG / (AAC_NUM_BLOCKS_SHORT * PSY_LAME_NUM_SUBBLOCKS);
float p = 1.0f;
for (; pf < pfe; pf++)
if (p < fabsf(*pf))
p = fabsf(*pf);
pch->prev_energy_subshort[i] = energy_subshort[i + PSY_LAME_NUM_SUBBLOCKS] = p;
energy_short[1 + i / PSY_LAME_NUM_SUBBLOCKS] += p;
if (p > energy_subshort[i + 1])
p = p / energy_subshort[i + 1];
else if (energy_subshort[i + 1] > p * 10.0f)
p = energy_subshort[i + 1] / (p * 10.0f);
else
p = 0.0;
attack_intensity[i + PSY_LAME_NUM_SUBBLOCKS] = p;
}
for (i = 0; i < (AAC_NUM_BLOCKS_SHORT + 1) * PSY_LAME_NUM_SUBBLOCKS; i++)
if (!attacks[i / PSY_LAME_NUM_SUBBLOCKS])
if (attack_intensity[i] > pch->attack_threshold)
attacks[i / PSY_LAME_NUM_SUBBLOCKS] = (i % PSY_LAME_NUM_SUBBLOCKS) + 1;
for (i = 1; i < AAC_NUM_BLOCKS_SHORT + 1; i++) {
float const u = energy_short[i - 1];
float const v = energy_short[i];
float const m = FFMAX(u, v);
if (m < 40000) {
if (u < 1.7f * v && v < 1.7f * u) {
if (i == 1 && attacks[0] < attacks[i])
attacks[0] = 0;
attacks[i] = 0;
}
}
att_sum += attacks[i];
}
if (attacks[0] <= pch->prev_attack)
attacks[0] = 0;
att_sum += attacks[0];
if (pch->prev_attack == 3 || att_sum) {
uselongblock = 0;
if (attacks[1] && attacks[0])
attacks[1] = 0;
if (attacks[2] && attacks[1])
attacks[2] = 0;
if (attacks[3] && attacks[2])
attacks[3] = 0;
if (attacks[4] && attacks[3])
attacks[4] = 0;
if (attacks[5] && attacks[4])
attacks[5] = 0;
if (attacks[6] && attacks[5])
attacks[6] = 0;
if (attacks[7] && attacks[6])
attacks[7] = 0;
if (attacks[8] && attacks[7])
attacks[8] = 0;
}
} else {
uselongblock = !(prev_type == EIGHT_SHORT_SEQUENCE);
}
lame_apply_block_type(pch, &wi, uselongblock);
wi.window_type[1] = prev_type;
if (wi.window_type[0] != EIGHT_SHORT_SEQUENCE) {
wi.num_windows = 1;
wi.grouping[0] = 1;
if (wi.window_type[0] == LONG_START_SEQUENCE)
wi.window_shape = 0;
else
wi.window_shape = 1;
} else {
int lastgrp = 0;
wi.num_windows = 8;
wi.window_shape = 0;
for (i = 0; i < 8; i++) {
if (!((pch->next_grouping >> i) & 1))
lastgrp = i;
wi.grouping[lastgrp]++;
}
}
for (i = 0; i < 9; i++) {
if (attacks[i]) {
grouping = i;
break;
}
}
pch->next_grouping = window_grouping[grouping];
pch->prev_attack = attacks[8];
return wi;
}
| 1threat |
static void virtio_scsi_request_cancelled(SCSIRequest *r)
{
VirtIOSCSIReq *req = r->hba_private;
if (!req) {
return;
}
if (req->dev->resetting) {
req->resp.cmd->response = VIRTIO_SCSI_S_RESET;
} else {
req->resp.cmd->response = VIRTIO_SCSI_S_ABORTED;
}
virtio_scsi_complete_cmd_req(req);
}
| 1threat |
C++ organize flow of functions : <p>I'm writing a program for a microcontroller in C++, and I need to write a function to input some numbers through a computer connected to it.</p>
<p>This function should perform many different and well-defined tasks, e.g.: obtain data (characters) from the computer, check if the characters are valid, transform the characters in actual numbers, and many others. Written as a single function it would be at least 500 lines long. So I'll write a group of shorter functions and one "main" function that calls the others in the only meaningful order. Those functions will never be called in the rest of the code (except of course the main function). One last thing - the functions need to pass each other quite a lot of variables.</p>
<hr>
<p>What is the best way to organize those functions? My first thought was to create a class with only the "main" function in the public section and the other functions and the variables shared by different functions as private members, but I was wondering if this is good practice: I think it doesn't respect the C++ concept of "class"... for example, to use this "group of functions" I would need to do something like this:</p>
<pre><code>class GetNumbers {
public:
//using the constructor as what I called "main" function
GetNumbers(int arg1, char arg2) {
performFirstAction();
performSecondAction();
...
}
private:
    void performFirstAction() {...}
    void performSecondAction() {...}
...
bool aSharedVariable;
int anotherVariable;
...
};
</code></pre>
<p>And where I actually need to input those numbers from the computer:</p>
<pre><code>GetNumbers thisMakesNoSenseInMyOpinion (x,y);
</code></pre>
<p>Making the "main" function a normal class method (and not the constructor) seems to be even worse:</p>
<pre><code>GetNumbers howCanICallThis;
howCanICallThis.getNumbers(x,y);
...
//somewhere else in the same scope
howCanICallThis.getNumbers(r,s);
</code></pre>
| 0debug |
How to load image, convert it to JPG format and resize it : <p>I need to load an image from the computer, convert it to JPG/JPEG, resize it to 60x60, and then send it via sockets. I know how to send an image, but I don't know how to do the image processing in C# (Windows Forms)...</p>
| 0debug |
How is "Target Groups" different from "Auto-Scaling Groups" in AWS? : <p>I'm a little too confused on the terms and its usage. Can you please help me understand how are these used with Load Balancers?</p>
<p>I referred the <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html" rel="noreferrer">aws-doc</a> in vain for this :(</p>
| 0debug |
Does Kotlin have a syntax for Map literals? : <p>In JavaScript: <code>{foo: bar, biz: qux}</code>.</p>
<p>In Ruby: <code>{foo => bar, biz => qux}</code>.</p>
<p>In Java:</p>
<pre><code>HashMap<K, V> map = new HashMap<>();
map.put(foo, bar);
map.put(biz, qux);
</code></pre>
<p>Surely Kotlin can do better than Java?</p>
| 0debug |
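Not literal map syntax, but Kotlin's standard library comes close (a sketch):

```kotlin
fun main() {
    // `to` is an infix function that builds a Pair; mapOf consumes pairs.
    val map = mapOf("foo" to "bar", "biz" to "qux")   // read-only Map
    println(map["foo"])                               // bar

    val mutable = mutableMapOf("foo" to "bar")        // MutableMap
    mutable["biz"] = "qux"
    println(mutable.size)                             // 2
}
```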
static void tight_send_compact_size(VncState *vs, size_t len)
{
int lpc = 0;
int bytes = 0;
char buf[3] = {0, 0, 0};
buf[bytes++] = len & 0x7F;
if (len > 0x7F) {
buf[bytes-1] |= 0x80;
buf[bytes++] = (len >> 7) & 0x7F;
if (len > 0x3FFF) {
buf[bytes-1] |= 0x80;
buf[bytes++] = (len >> 14) & 0xFF;
}
}
for (lpc = 0; lpc < bytes; lpc++) {
vnc_write_u8(vs, buf[lpc]);
}
}
| 1threat |
Is there a way to create an unordered list from an array in JavaScript? : I want to use pure HTML and JavaScript instead of HAML.
Previously, when I was working with this form project, I looped through the array like this:
- @item1 = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
- @item1.each_with_index do |item, i|
  %li.option
    %input.option-input{name: 'options', type: 'radio', value: i, id: "options-#{i}"}/
    %label.option-label{:for => "options-#{i}"}= item | 0debug
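In plain JavaScript, the same loop can build the list markup directly (a sketch: class names and ids are copied from the HAML; inserting the string into the DOM is left out):

```javascript
// Build the same radio-option markup the HAML produced, as a string.
const items = ['a', 'b', 'c', 'd', 'e', 'f', 'g'];

function renderOptions(list) {
  return list
    .map((item, i) =>
      `<li class="option">` +
      `<input class="option-input" name="options" type="radio" value="${i}" id="options-${i}">` +
      `<label class="option-label" for="options-${i}">${item}</label>` +
      `</li>`)
    .join('');
}

const html = `<ul>${renderOptions(items)}</ul>`;
console.log(html.startsWith('<ul><li class="option">')); // true
```

In a browser you could then assign `html` to some container's `innerHTML`, or build the elements with `document.createElement` instead.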
static uint64_t fw_cfg_comb_read(void *opaque, hwaddr addr,
unsigned size)
{
return fw_cfg_read(opaque);
}
| 1threat |
SQL Pivot or something else? : <p>The situation is a little bit complicated. I have a table with the following structure and data:</p>
<pre><code>+--------------+--------------+-------------+
| Direction | Denomination | Den_Count |
+--------------+--------------+-------------+
| OUT | 100 | 54 |
| OUT | 200 | 56 |
| IN | 1000 | 75 |
| IN | 2000 | 408 |
| IN | 5 | 23 |
| OUT | 10 | 39 |
+--------------+--------------+-------------+
</code></pre>
<p>For the purpose of creating CSV files for future extraction, I need to have output like this:</p>
<pre><code>+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| 100 | NULL| 200 | NULL| 500 | NULL| 1000| NULL| 2000| NULL|
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----|
| IN | OUT | IN | OUT | IN | OUT | IN | OUT | IN | OUT |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----|
|1111 |1000 | 2222| 0 | 333 | 0 | 555 | 0 | 100 | 68 |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----|
</code></pre>
<p>Any idea? I am using MS SQL Server 2014.</p>
| 0debug |
How to enable Version Control window in android studio : <p>For certain branches, I cannot get a Version Control window. So for example, if I go into branch <code>develop</code>, then the window shows; if I go into <code>master</code> it’s gone and there is nothing I can do to bring it back. I have tried to trick it by going from develop to master, but as soon as I get to master, it is gone again. The problem is relatively new (7 days); it didn’t use to be like that. Any ideas how I might fix it?</p>
<p>I usually use the window for easy access to my log of commits</p>
<p><a href="https://i.stack.imgur.com/VhllQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VhllQ.png" alt="enter image description here"></a></p>
| 0debug |
void tb_check_watchpoint(CPUState *cpu)
{
TranslationBlock *tb;
tb = tb_find_pc(cpu->mem_io_pc);
if (!tb) {
cpu_abort(cpu, "check_watchpoint: could not find TB for pc=%p",
(void *)cpu->mem_io_pc);
}
cpu_restore_state_from_tb(cpu, tb, cpu->mem_io_pc);
tb_phys_invalidate(tb, -1);
}
| 1threat |
React Native - Fetch POST request is sending as GET request : <p>I'm having issues when using FETCH.</p>
<p>I am trying to make a POST request using FETCH in react-native.</p>
<pre><code> fetch("http://www.example.co.uk/login", {
method: 'POST',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json'
},
body: JSON.stringify({
username: 'test',
password: 'test123',
})
})
.then((response) => response.json())
.then((responseData) => {
console.log(
"POST Response",
"Response Body -> " + JSON.stringify(responseData)
)
})
.done();
}
</code></pre>
<p>When I inspect this call using Charles it is recorded as a GET request and the username and password that should be in the body are not there.</p>
<p><a href="https://i.stack.imgur.com/4bITU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4bITU.png" alt="enter image description here"></a></p>
<p>Can anyone help with this issue?</p>
| 0debug |
C code for this pattern 1 22 22 333 333 333 : I want to write code in C for this pattern:
1
22
22
333
333
333
..and I need help. Thank you! :)
This is what I've tried:
int n, i, j;
scanf("%d", &n);
for (i = 1; i <= n; i++) {
    printf("\n");
    for (j = 1; j <= i; j++) {
        printf("%d", i);
    }
}
MKSCALE16(scale16be, AV_RB16, AV_WB16)
MKSCALE16(scale16le, AV_RL16, AV_WL16)
static int raw_decode(AVCodecContext *avctx, void *data, int *got_frame,
AVPacket *avpkt)
{
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(avctx->pix_fmt);
RawVideoContext *context = avctx->priv_data;
const uint8_t *buf = avpkt->data;
int buf_size = avpkt->size;
int avpkt_stride = avpkt->size / avctx->height;
int linesize_align = 4;
int res, len;
int need_copy;
AVFrame *frame = data;
if ((avctx->bits_per_coded_sample == 8 || avctx->bits_per_coded_sample == 4
|| avctx->bits_per_coded_sample == 2 || avctx->bits_per_coded_sample == 1) &&
avctx->pix_fmt == AV_PIX_FMT_PAL8 &&
(!avctx->codec_tag || avctx->codec_tag == MKTAG('r','a','w',' '))) {
context->is_1_2_4_8_bpp = 1;
context->frame_size = av_image_get_buffer_size(avctx->pix_fmt,
FFALIGN(avctx->width, 16),
avctx->height, 1);
} else {
context->is_lt_16bpp = av_get_bits_per_pixel(desc) == 16 && avctx->bits_per_coded_sample && avctx->bits_per_coded_sample < 16;
context->frame_size = av_image_get_buffer_size(avctx->pix_fmt, avctx->width,
avctx->height, 1);
}
if (context->frame_size < 0)
return context->frame_size;
need_copy = !avpkt->buf || context->is_1_2_4_8_bpp || context->is_yuv2 || context->is_lt_16bpp;
frame->pict_type = AV_PICTURE_TYPE_I;
frame->key_frame = 1;
res = ff_decode_frame_props(avctx, frame);
if (res < 0)
return res;
av_frame_set_pkt_pos (frame, avctx->internal->pkt->pos);
av_frame_set_pkt_duration(frame, avctx->internal->pkt->duration);
if (context->tff >= 0) {
frame->interlaced_frame = 1;
frame->top_field_first = context->tff;
}
if ((res = av_image_check_size(avctx->width, avctx->height, 0, avctx)) < 0)
return res;
if (need_copy)
frame->buf[0] = av_buffer_alloc(FFMAX(context->frame_size, buf_size));
else
frame->buf[0] = av_buffer_ref(avpkt->buf);
if (!frame->buf[0])
return AVERROR(ENOMEM);
if (context->is_1_2_4_8_bpp) {
int i, j, row_pix = 0;
uint8_t *dst = frame->buf[0]->data;
buf_size = context->frame_size - AVPALETTE_SIZE;
if (avctx->bits_per_coded_sample == 8) {
for (i = 0, j = 0; j < buf_size && i<avpkt->size; i++, j++) {
dst[j] = buf[i];
row_pix++;
if (row_pix == avctx->width) {
i += avpkt_stride - (i % avpkt_stride) - 1;
j += 16 - (j % 16) - 1;
row_pix = 0;
}
}
} else if (avctx->bits_per_coded_sample == 4) {
for (i = 0, j = 0; 2 * j + 1 < buf_size && i<avpkt->size; i++, j++) {
dst[2 * j + 0] = buf[i] >> 4;
dst[2 * j + 1] = buf[i] & 15;
row_pix += 2;
if (row_pix >= avctx->width) {
i += avpkt_stride - (i % avpkt_stride) - 1;
j += 8 - (j % 8) - 1;
row_pix = 0;
}
}
} else if (avctx->bits_per_coded_sample == 2) {
for (i = 0, j = 0; 4 * j + 3 < buf_size && i<avpkt->size; i++, j++) {
dst[4 * j + 0] = buf[i] >> 6;
dst[4 * j + 1] = buf[i] >> 4 & 3;
dst[4 * j + 2] = buf[i] >> 2 & 3;
dst[4 * j + 3] = buf[i] & 3;
row_pix += 4;
if (row_pix >= avctx->width) {
i += avpkt_stride - (i % avpkt_stride) - 1;
j += 4 - (j % 4) - 1;
row_pix = 0;
}
}
} else {
av_assert0(avctx->bits_per_coded_sample == 1);
for (i = 0, j = 0; 8 * j + 7 < buf_size && i<avpkt->size; i++, j++) {
dst[8 * j + 0] = buf[i] >> 7;
dst[8 * j + 1] = buf[i] >> 6 & 1;
dst[8 * j + 2] = buf[i] >> 5 & 1;
dst[8 * j + 3] = buf[i] >> 4 & 1;
dst[8 * j + 4] = buf[i] >> 3 & 1;
dst[8 * j + 5] = buf[i] >> 2 & 1;
dst[8 * j + 6] = buf[i] >> 1 & 1;
dst[8 * j + 7] = buf[i] & 1;
row_pix += 8;
if (row_pix >= avctx->width) {
i += avpkt_stride - (i % avpkt_stride) - 1;
j += 2 - (j % 2) - 1;
row_pix = 0;
}
}
}
linesize_align = 16;
buf = dst;
} else if (context->is_lt_16bpp) {
uint8_t *dst = frame->buf[0]->data;
int packed = (avctx->codec_tag & 0xFFFFFF) == MKTAG('B','I','T', 0);
int swap = avctx->codec_tag >> 24;
if (packed && swap) {
av_fast_padded_malloc(&context->bitstream_buf, &context->bitstream_buf_size, buf_size);
if (!context->bitstream_buf)
return AVERROR(ENOMEM);
if (swap == 16)
context->bbdsp.bswap16_buf(context->bitstream_buf, (const uint16_t*)buf, buf_size / 2);
else if (swap == 32)
context->bbdsp.bswap_buf(context->bitstream_buf, (const uint32_t*)buf, buf_size / 4);
else
return AVERROR_INVALIDDATA;
buf = context->bitstream_buf;
}
if (desc->flags & AV_PIX_FMT_FLAG_BE)
scale16be(avctx, dst, buf, buf_size, packed);
else
scale16le(avctx, dst, buf, buf_size, packed);
buf = dst;
} else if (need_copy) {
memcpy(frame->buf[0]->data, buf, buf_size);
buf = frame->buf[0]->data;
}
if (avctx->codec_tag == MKTAG('A', 'V', '1', 'x') ||
avctx->codec_tag == MKTAG('A', 'V', 'u', 'p'))
buf += buf_size - context->frame_size;
len = context->frame_size - (avctx->pix_fmt==AV_PIX_FMT_PAL8 ? AVPALETTE_SIZE : 0);
if (buf_size < len && ((avctx->codec_tag & 0xFFFFFF) != MKTAG('B','I','T', 0) || !need_copy)) {
av_log(avctx, AV_LOG_ERROR, "Invalid buffer size, packet size %d < expected frame_size %d\n", buf_size, len);
av_buffer_unref(&frame->buf[0]);
return AVERROR(EINVAL);
}
if ((res = av_image_fill_arrays(frame->data, frame->linesize,
buf, avctx->pix_fmt,
avctx->width, avctx->height, 1)) < 0) {
av_buffer_unref(&frame->buf[0]);
return res;
}
if (avctx->pix_fmt == AV_PIX_FMT_PAL8) {
const uint8_t *pal = av_packet_get_side_data(avpkt, AV_PKT_DATA_PALETTE,
NULL);
if (pal) {
av_buffer_unref(&context->palette);
context->palette = av_buffer_alloc(AVPALETTE_SIZE);
if (!context->palette) {
av_buffer_unref(&frame->buf[0]);
return AVERROR(ENOMEM);
}
memcpy(context->palette->data, pal, AVPALETTE_SIZE);
frame->palette_has_changed = 1;
}
}
if ((avctx->pix_fmt==AV_PIX_FMT_BGR24 ||
avctx->pix_fmt==AV_PIX_FMT_GRAY8 ||
avctx->pix_fmt==AV_PIX_FMT_RGB555LE ||
avctx->pix_fmt==AV_PIX_FMT_RGB555BE ||
avctx->pix_fmt==AV_PIX_FMT_RGB565LE ||
avctx->pix_fmt==AV_PIX_FMT_MONOWHITE ||
avctx->pix_fmt==AV_PIX_FMT_PAL8) &&
FFALIGN(frame->linesize[0], linesize_align) * avctx->height <= buf_size)
frame->linesize[0] = FFALIGN(frame->linesize[0], linesize_align);
if (avctx->pix_fmt == AV_PIX_FMT_NV12 && avctx->codec_tag == MKTAG('N', 'V', '1', '2') &&
FFALIGN(frame->linesize[0], linesize_align) * avctx->height +
FFALIGN(frame->linesize[1], linesize_align) * ((avctx->height + 1) / 2) <= buf_size) {
int la0 = FFALIGN(frame->linesize[0], linesize_align);
frame->data[1] += (la0 - frame->linesize[0]) * avctx->height;
frame->linesize[0] = la0;
frame->linesize[1] = FFALIGN(frame->linesize[1], linesize_align);
}
if ((avctx->pix_fmt == AV_PIX_FMT_PAL8 && buf_size < context->frame_size) ||
(desc->flags & AV_PIX_FMT_FLAG_PSEUDOPAL)) {
frame->buf[1] = av_buffer_ref(context->palette);
if (!frame->buf[1]) {
av_buffer_unref(&frame->buf[0]);
return AVERROR(ENOMEM);
}
frame->data[1] = frame->buf[1]->data;
}
if (avctx->pix_fmt == AV_PIX_FMT_BGR24 &&
((frame->linesize[0] + 3) & ~3) * avctx->height <= buf_size)
frame->linesize[0] = (frame->linesize[0] + 3) & ~3;
if (context->flip)
flip(avctx, frame);
if (avctx->codec_tag == MKTAG('Y', 'V', '1', '2') ||
avctx->codec_tag == MKTAG('Y', 'V', '1', '6') ||
avctx->codec_tag == MKTAG('Y', 'V', '2', '4') ||
avctx->codec_tag == MKTAG('Y', 'V', 'U', '9'))
FFSWAP(uint8_t *, frame->data[1], frame->data[2]);
if (avctx->codec_tag == AV_RL32("I420") && (avctx->width+1)*(avctx->height+1) * 3/2 == buf_size) {
frame->data[1] = frame->data[1] + (avctx->width+1)*(avctx->height+1) -avctx->width*avctx->height;
frame->data[2] = frame->data[2] + ((avctx->width+1)*(avctx->height+1) -avctx->width*avctx->height)*5/4;
}
if (avctx->codec_tag == AV_RL32("yuv2") &&
avctx->pix_fmt == AV_PIX_FMT_YUYV422) {
int x, y;
uint8_t *line = frame->data[0];
for (y = 0; y < avctx->height; y++) {
for (x = 0; x < avctx->width; x++)
line[2 * x + 1] ^= 0x80;
line += frame->linesize[0];
}
}
if (avctx->field_order > AV_FIELD_PROGRESSIVE) {
frame->interlaced_frame = 1;
if (avctx->field_order == AV_FIELD_TT || avctx->field_order == AV_FIELD_TB)
frame->top_field_first = 1;
}
*got_frame = 1;
return buf_size;
}
| 1threat |
Python unknown operand type for custom class : I have a custom class in my Python code that handles k-means clustering. The class takes some arguments to customize the clustering; however, when subtracting two values from a list passed to the class, I get the following error:
TypeError: unsupported operand type(s) for -: 'KMeans' and 'KMeans'
Here is the code of my custom class:
import KMeansClusterer
from math import sqrt, fabs
from matplotlib import pyplot as plp
class ClusterCalculator:
m = 0
b = 0
sum_squared_dist = []
derivates = []
distances = []
line_coordinates = []
def __init__(self, calc_border, data):
self.calc_border = calc_border
self.data = data
def calculate_optimum_clusters(self):
self.calculate_squared_dist()
self.init_opt_line()
self.calc_distances()
self.calc_line_coordinates()
opt_clusters = self.get_optimum_clusters()
print("Evaluated", opt_clusters, "as optimum number of clusters")
return opt_clusters
def calculate_squared_dist(self):
for k in range(1, self.calc_border):
kmeans = KMeansClusterer.KMeansClusterer(k, self.data)
self.sum_squared_dist.append(kmeans.calc_custom_params(self.data, k))
def init_opt_line(self):
#here the error is thrown
self. m = (self.sum_squared_dist[0] - self.sum_squared_dist[1]) / (1 - self.calc_border)
self.b = (1 * self.sum_squared_dist[0] - self.calc_border*self.sum_squared_dist[0]) / (1 - self.calc_border)
def calc_y_value(self, x_calc):
return self.m * x_calc + self.b
def calc_line_coordinates(self):
for i in range(1, self.calc_border):
self.line_coordinates.append(self.calc_y_value(i))
def calc_distances(self):
for i in range(1, self.calc_border):
self.distances.append(sqrt(fabs(self.calc_y_value(i))))
print("For border", self.calc_border, ", calculated the following distances: \n", self.distances)
def get_optimum_clusters(self):
return self.distances.index((max(self.distances)))
def plot_results(self):
plp.plot(range(1, self.calc_border), self.sum_squared_dist, "bx-")
plp.plot(range(1, self.calc_border), self.line_coordinates, "bx-")
plp.xlabel("Number of clusters")
plp.ylabel("Sum of squared distances")
plp.show()
I append the KMeansClusterer as well, because `sum_squared_dist` is filled with values from there:
from sklearn.cluster import KMeans
from matplotlib import pyplot as plp
class KMeansClusterer:
def __init__(self, clusters, data):
self.clusters = clusters
self.data = data
def cluster(self):
kmeans = KMeans(n_clusters=self.cluster(), random_state=0).fit(self.data)
print("Clustered", len(kmeans.labels_), "GTINs")
for i, cluster_center in enumerate(kmeans.cluster_centers_):
plp.plot(cluster_center, label="Center {0}".format(i))
plp.legend(loc="best")
plp.show()
def calc_custom_params(self, data_frame, clusters):
kmeans = KMeans(n_clusters=clusters, random_state=0).fit(data_frame)
return kmeans
def cluster_without_plot(self):
return KMeans(n_clusters=self.cluster(), random_state=0).fit(self.data)
I cannot imagine why '-' should be unsupported; I tried to subtract two list values, which I expected to be integers.
Can someone help me?
| 0debug |
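The error comes from `calc_custom_params` returning the fitted `KMeans` estimator itself, so `sum_squared_dist` ends up holding `KMeans` objects, and subtracting two of those raises the `TypeError`. A sketch of the likely fix (returning `inertia_`, scikit-learn's sum of squared distances, so the elbow arithmetic works on floats):

```python
from sklearn.cluster import KMeans

def calc_custom_params(data_frame, clusters):
    # Return the sum of squared distances (a float), not the estimator,
    # so the subtraction in ClusterCalculator.init_opt_line is well-defined.
    kmeans = KMeans(n_clusters=clusters, random_state=0).fit(data_frame)
    return kmeans.inertia_
```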
alert('Hello ' + user_input); | 1threat |
get last row in LINQ : I have a table whose PK is a string, and I can't get the last row.
When I use `order by code desc` (code is the PK),
last row is **9999**
[![enter image description here][1]][1]
but real last row is **73858**
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/yIPLn.png
[2]: https://i.stack.imgur.com/9Rnwi.png
Help me please T__T
I'm from Thailand sorry if incorrect language | 0debug |
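The likely cause: because the PK is a string, the ordering is lexicographic, and "9999" sorts after "73858" since '9' > '7' at the first character. Ordering by the numeric value (e.g. a parsed or zero-padded key in the LINQ query) gives the expected last row. The principle, illustrated in Python since it is language-independent:

```python
codes = ["9999", "73858"]

# Lexicographic (string) ordering compares character by character,
# so "9999" > "73858" because '9' > '7' at the first position.
last_as_string = max(codes)

# Numeric ordering treats the whole value as a number.
last_as_number = max(codes, key=int)

print(last_as_string, last_as_number)  # 9999 73858
```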
Git - Can we recover deleted commits? : <p>I am surprised, I couldn't find the answer to this on SO.</p>
<blockquote>
<p>Can we recover/restore deleted commits in git?</p>
</blockquote>
<p>For example, this is what I did:</p>
<pre><code># Remove the last commit from my local branch
$ git reset --hard HEAD~1
# Force push the delete
$ git push --force
</code></pre>
<p>Now, is there a way to get back the commit which was deleted? Does git record(log) the delete internally?</p>
| 0debug |
Is there any guarantee about the evaluation order within a pattern match? : <p>The following</p>
<pre><code>(&&) :: Bool -> Bool -> Bool
False && _ = False
True && False = False
True && True = True
</code></pre>
<p>has the desired short-circuit property <code>False && undefined ≡ False</code>. The first clause, which is non-strict in the right argument, is guaranteed to be checked before anything else is tried.</p>
<p>Apparently, it still works if I change the order and even uncurry the function</p>
<pre><code>both :: (Bool,Bool) -> Bool
both (True,False) = False
both (True, True) = True
both (False, _) = False
Prelude> both (False, undefined)
False
</code></pre>
<p>but is this actually guaranteed by the standard? Unlike with the order of clauses, the order of evaluation of the patterns is not so clear here. Can I actually be sure that matching <code>(True,False)</code> will be aborted as soon as <code>(False,_)</code> is determined, before the snd element is evaluated at all?</p>
| 0debug |
what does 'DeprecationWarning' mean? : I ran a Python program and got the warning below:
D:\programs\anaconda2\lib\site-packages\sklearn\utils\deprecation.py:70: DeprecationWarning: Function log_multivariate_normal_density is deprecated; The function log_multivariate_normal_density is deprecated in 0.18 and will be removed in 0.20.
warnings.warn(msg, category=DeprecationWarning)
I can't tell what is wrong here. Could someone give me some advice? | 0debug |
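A minimal sketch of what a `DeprecationWarning` is: advice, not an error. The stub below is a hypothetical stand-in for the deprecated sklearn function named in the warning; it shows that the call still runs and returns normally, and that the `warnings` module can capture (or silence) the message.

```python
import warnings

def log_multivariate_normal_density_stub(x):
    # A stand-in for the deprecated sklearn function named in the warning.
    warnings.warn(
        "Function log_multivariate_normal_density is deprecated",
        DeprecationWarning,
    )
    return x * 2

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = log_multivariate_normal_density_stub(21)

# The function still ran and returned a value -- the warning is advice only.
print(result)                                    # 42
print(caught[0].category is DeprecationWarning)  # True
```

In other words, nothing is broken yet: the warning says the function will be removed in sklearn 0.20, so the code should eventually migrate to the replacement API, but it works today.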
static void vncws_send_handshake_response(VncState *vs, const char* key)
{
char combined_key[WS_CLIENT_KEY_LEN + WS_GUID_LEN + 1];
unsigned char hash[SHA1_DIGEST_LEN];
size_t hash_size = sizeof(hash);
char *accept = NULL, *response = NULL;
gnutls_datum_t in;
int ret;
g_strlcpy(combined_key, key, WS_CLIENT_KEY_LEN + 1);
g_strlcat(combined_key, WS_GUID, WS_CLIENT_KEY_LEN + WS_GUID_LEN + 1);
in.data = (void *)combined_key;
in.size = WS_CLIENT_KEY_LEN + WS_GUID_LEN;
ret = gnutls_fingerprint(GNUTLS_DIG_SHA1, &in, hash, &hash_size);
if (ret == GNUTLS_E_SUCCESS && hash_size <= SHA1_DIGEST_LEN) {
accept = g_base64_encode(hash, hash_size);
}
if (accept == NULL) {
VNC_DEBUG("Hashing Websocket combined key failed\n");
vnc_client_error(vs);
return;
}
response = g_strdup_printf(WS_HANDSHAKE, accept);
vnc_client_write_buf(vs, (const uint8_t *)response, strlen(response));
g_free(accept);
g_free(response);
vs->encode_ws = 1;
vnc_init_state(vs);
}
| 1threat |
static inline int msmpeg4_decode_block(MpegEncContext * s, DCTELEM * block,
int n, int coded, const uint8_t *scan_table)
{
int level, i, last, run, run_diff;
int dc_pred_dir;
RLTable *rl;
RL_VLC_ELEM *rl_vlc;
int qmul, qadd;
if (s->mb_intra) {
qmul=1;
qadd=0;
set_stat(ST_DC);
level = msmpeg4_decode_dc(s, n, &dc_pred_dir);
#ifdef PRINT_MB
{
static int c;
if(n==0) c=0;
if(n==4) printf("%X", c);
c+= c +dc_pred_dir;
}
#endif
if (level < 0){
fprintf(stderr, "dc overflow- block: %d qscale: %d\n", n, s->qscale);
if(s->inter_intra_pred) level=0;
else return -1;
}
if (n < 4) {
rl = &rl_table[s->rl_table_index];
if(level > 256*s->y_dc_scale){
fprintf(stderr, "dc overflow+ L qscale: %d\n", s->qscale);
if(!s->inter_intra_pred) return -1;
}
} else {
rl = &rl_table[3 + s->rl_chroma_table_index];
if(level > 256*s->c_dc_scale){
fprintf(stderr, "dc overflow+ C qscale: %d\n", s->qscale);
if(!s->inter_intra_pred) return -1;
}
}
block[0] = level;
run_diff = 0;
i = 0;
if (!coded) {
goto not_coded;
}
if (s->ac_pred) {
if (dc_pred_dir == 0)
scan_table = s->intra_v_scantable.permutated;
else
scan_table = s->intra_h_scantable.permutated;
} else {
scan_table = s->intra_scantable.permutated;
}
set_stat(ST_INTRA_AC);
rl_vlc= rl->rl_vlc[0];
} else {
qmul = s->qscale << 1;
qadd = (s->qscale - 1) | 1;
i = -1;
rl = &rl_table[3 + s->rl_table_index];
if(s->msmpeg4_version==2)
run_diff = 0;
else
run_diff = 1;
if (!coded) {
s->block_last_index[n] = i;
return 0;
}
if(!scan_table)
scan_table = s->inter_scantable.permutated;
set_stat(ST_INTER_AC);
rl_vlc= rl->rl_vlc[s->qscale];
}
{
OPEN_READER(re, &s->gb);
for(;;) {
UPDATE_CACHE(re, &s->gb);
GET_RL_VLC(level, run, re, &s->gb, rl_vlc, TEX_VLC_BITS, 2);
if (level==0) {
int cache;
cache= GET_CACHE(re, &s->gb);
if (s->msmpeg4_version==1 || (cache&0x80000000)==0) {
if (s->msmpeg4_version==1 || (cache&0x40000000)==0) {
if(s->msmpeg4_version!=1) LAST_SKIP_BITS(re, &s->gb, 2);
UPDATE_CACHE(re, &s->gb);
if(s->msmpeg4_version<=3){
last= SHOW_UBITS(re, &s->gb, 1); SKIP_CACHE(re, &s->gb, 1);
run= SHOW_UBITS(re, &s->gb, 6); SKIP_CACHE(re, &s->gb, 6);
level= SHOW_SBITS(re, &s->gb, 8); LAST_SKIP_CACHE(re, &s->gb, 8);
SKIP_COUNTER(re, &s->gb, 1+6+8);
}else{
int sign;
last= SHOW_UBITS(re, &s->gb, 1); SKIP_BITS(re, &s->gb, 1);
if(!s->esc3_level_length){
int ll;
if(s->qscale<8){
ll= SHOW_UBITS(re, &s->gb, 3); SKIP_BITS(re, &s->gb, 3);
if(ll==0){
if(SHOW_UBITS(re, &s->gb, 1)) printf("cool a new vlc code ,contact the ffmpeg developers and upload the file\n");
SKIP_BITS(re, &s->gb, 1);
ll=8;
}
}else{
ll=2;
while(ll<8 && SHOW_UBITS(re, &s->gb, 1)==0){
ll++;
SKIP_BITS(re, &s->gb, 1);
}
if(ll<8) SKIP_BITS(re, &s->gb, 1);
}
s->esc3_level_length= ll;
s->esc3_run_length= SHOW_UBITS(re, &s->gb, 2) + 3; SKIP_BITS(re, &s->gb, 2);
UPDATE_CACHE(re, &s->gb);
}
run= SHOW_UBITS(re, &s->gb, s->esc3_run_length);
SKIP_BITS(re, &s->gb, s->esc3_run_length);
sign= SHOW_UBITS(re, &s->gb, 1);
SKIP_BITS(re, &s->gb, 1);
level= SHOW_UBITS(re, &s->gb, s->esc3_level_length);
SKIP_BITS(re, &s->gb, s->esc3_level_length);
if(sign) level= -level;
}
#if 0
{
const int abs_level= ABS(level);
const int run1= run - rl->max_run[last][abs_level] - run_diff;
if(abs_level<=MAX_LEVEL && run<=MAX_RUN){
if(abs_level <= rl->max_level[last][run]){
fprintf(stderr, "illegal 3. esc, vlc encoding possible\n");
return DECODING_AC_LOST;
}
if(abs_level <= rl->max_level[last][run]*2){
fprintf(stderr, "illegal 3. esc, esc 1 encoding possible\n");
return DECODING_AC_LOST;
}
if(run1>=0 && abs_level <= rl->max_level[last][run1]){
fprintf(stderr, "illegal 3. esc, esc 2 encoding possible\n");
return DECODING_AC_LOST;
}
}
}
#endif
if (level>0) level= level * qmul + qadd;
else level= level * qmul - qadd;
#if 0
if(level>2048 || level<-2048){
fprintf(stderr, "|level| overflow in 3. esc\n");
return DECODING_AC_LOST;
}
#endif
i+= run + 1;
if(last) i+=192;
#ifdef ERROR_DETAILS
if(run==66)
fprintf(stderr, "illegal vlc code in ESC3 level=%d\n", level);
else if((i>62 && i<192) || i>192+63)
fprintf(stderr, "run overflow in ESC3 i=%d run=%d level=%d\n", i, run, level);
#endif
} else {
#if MIN_CACHE_BITS < 23
LAST_SKIP_BITS(re, &s->gb, 2);
UPDATE_CACHE(re, &s->gb);
#else
SKIP_BITS(re, &s->gb, 2);
#endif
GET_RL_VLC(level, run, re, &s->gb, rl_vlc, TEX_VLC_BITS, 2);
i+= run + rl->max_run[run>>7][level/qmul] + run_diff;
level = (level ^ SHOW_SBITS(re, &s->gb, 1)) - SHOW_SBITS(re, &s->gb, 1);
LAST_SKIP_BITS(re, &s->gb, 1);
#ifdef ERROR_DETAILS
if(run==66)
fprintf(stderr, "illegal vlc code in ESC2 level=%d\n", level);
else if((i>62 && i<192) || i>192+63)
fprintf(stderr, "run overflow in ESC2 i=%d run=%d level=%d\n", i, run, level);
#endif
}
} else {
#if MIN_CACHE_BITS < 22
LAST_SKIP_BITS(re, &s->gb, 1);
UPDATE_CACHE(re, &s->gb);
#else
SKIP_BITS(re, &s->gb, 1);
#endif
GET_RL_VLC(level, run, re, &s->gb, rl_vlc, TEX_VLC_BITS, 2);
i+= run;
level = level + rl->max_level[run>>7][(run-1)&63] * qmul;
level = (level ^ SHOW_SBITS(re, &s->gb, 1)) - SHOW_SBITS(re, &s->gb, 1);
LAST_SKIP_BITS(re, &s->gb, 1);
#ifdef ERROR_DETAILS
if(run==66)
fprintf(stderr, "illegal vlc code in ESC1 level=%d\n", level);
else if((i>62 && i<192) || i>192+63)
fprintf(stderr, "run overflow in ESC1 i=%d run=%d level=%d\n", i, run, level);
#endif
}
} else {
i+= run;
level = (level ^ SHOW_SBITS(re, &s->gb, 1)) - SHOW_SBITS(re, &s->gb, 1);
LAST_SKIP_BITS(re, &s->gb, 1);
#ifdef ERROR_DETAILS
if(run==66)
fprintf(stderr, "illegal vlc code level=%d\n", level);
else if((i>62 && i<192) || i>192+63)
fprintf(stderr, "run overflow i=%d run=%d level=%d\n", i, run, level);
#endif
}
if (i > 62){
i-= 192;
if(i&(~63)){
const int left= s->gb.size*8 - get_bits_count(&s->gb);
if(((i+192 == 64 && level/qmul==-1) || s->error_resilience<=1) && left>=0){
fprintf(stderr, "ignoring overflow at %d %d\n", s->mb_x, s->mb_y);
break;
}else{
fprintf(stderr, "ac-tex damaged at %d %d\n", s->mb_x, s->mb_y);
return -1;
}
}
block[scan_table[i]] = level;
break;
}
block[scan_table[i]] = level;
}
CLOSE_READER(re, &s->gb);
}
not_coded:
if (s->mb_intra) {
mpeg4_pred_ac(s, block, n, dc_pred_dir);
if (s->ac_pred) {
i = 63;
}
}
if(s->msmpeg4_version>=4 && i>0) i=63;
s->block_last_index[n] = i;
return 0;
}
| 1threat |
static int Faac_encode_frame(AVCodecContext *avctx, AVPacket *avpkt,
const AVFrame *frame, int *got_packet_ptr)
{
FaacAudioContext *s = avctx->priv_data;
int bytes_written, ret;
int num_samples = frame ? frame->nb_samples : 0;
void *samples = frame ? frame->data[0] : NULL;
if ((ret = ff_alloc_packet2(avctx, avpkt, (7 + 768) * avctx->channels))) {
av_log(avctx, AV_LOG_ERROR, "Error getting output packet\n");
return ret;
}
bytes_written = faacEncEncode(s->faac_handle, samples,
num_samples * avctx->channels,
avpkt->data, avpkt->size);
if (bytes_written < 0) {
av_log(avctx, AV_LOG_ERROR, "faacEncEncode() error\n");
return bytes_written;
}
if (frame) {
if ((ret = ff_af_queue_add(&s->afq, frame)) < 0)
return ret;
}
if (!bytes_written)
return 0;
ff_af_queue_remove(&s->afq, avctx->frame_size, &avpkt->pts,
&avpkt->duration);
avpkt->size = bytes_written;
*got_packet_ptr = 1;
return 0;
}
| 1threat |
Compiling Visual Basic Project With Microsoft.CodeAnalysis.Emit : I have developed the following code to generate DLL files using the Microsoft.CodeAnalysis.Emit library. The code successfully generates DLLs for C# projects, but it fails to build Visual Basic projects: it throws a lot of compiler errors for VB projects that build successfully in the VS IDE. Please see the errors thrown for a basic Windows application project. Are there any specific compiler options for VB projects? Please advise how to resolve this.
class Program
{
static void Main(string[] args)
{
const string UnitTestArtifactFolder = @"c:\VSUnitTest";
string SolutionPath = @"C:\B\VBWinApp\VBWinApp\VBWinApp.vbproj";
CompileProject(SolutionPath, UnitTestArtifactFolder);
}
private static void CompileProject(string projectFilePath, string outputFolderPath)
{
using (var workspace = MSBuildWorkspace.Create())
{
var project = workspace.OpenProjectAsync(projectFilePath).Result;
Emit(project, outputFolderPath).Wait();
}
}
private static async Task Emit(Project project, string outputFolderPath)
{
Directory.CreateDirectory(outputFolderPath);
var options = GetCompilationOptions(project);
var compilation = await project.GetCompilationAsync();
compilation = compilation.WithOptions(options);
var outputFilePath = Path.Combine(outputFolderPath, Path.GetFileName(project.OutputFilePath));
var pdbFilePath = Path.ChangeExtension(outputFilePath, "pdb");
//LogInfo("Compiling the project...");
var compilationStatus = compilation.Emit(outputFilePath, pdbPath: pdbFilePath);
if (!compilationStatus.Success)
{
//LogError(compilationStatus.Diagnostics.First(k => k.WarningLevel == 0));
}
else
{
// LogInfo("Compilation successful.");
}
}
private static CompilationOptions GetCompilationOptions(Project project)
{
if (project.Language == "C#")
{
//LogInfo("Using C# Compilation Options");
return new CSharpCompilationOptions
(OutputKind.DynamicallyLinkedLibrary, optimizationLevel: OptimizationLevel.Debug);
}
else if (project.Language == "Visual Basic")
{
//LogInfo("Using Visual Basic Compilation Options");
return new VisualBasicCompilationOptions
(OutputKind.DynamicallyLinkedLibrary, optimizationLevel: OptimizationLevel.Debug);
}
else
{
//Throw if the language is other than C# or VB
throw new InvalidOperationException("Unsupported Language.");
}
}
}
**Compiler errors on basic Windows app.**
> [0] C:\B\VBWinApp\VBWinApp\My Project\Application.Designer.vb(34) :
> error BC30284: sub 'OnCreateMainForm' cannot be declared 'Overrides'
> because it does not override a sub in a base
> class. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [1] C:\B\VBWinApp\VBWinApp\My Project\Settings.Designer.vb(67) : error
> BC30002: Type 'Global.VBWinApp.My.MySettings' is not
> defined. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [2] C:\B\VBWinApp\VBWinApp\My Project\Settings.Designer.vb(69) : error
> BC30456: 'VBWinApp' is not a member of
> 'Global'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [3] C:\B\VBWinApp\VBWinApp\My Project\Application.Designer.vb(26) :
> error BC30057: Too many arguments to 'Public Overloads Sub
> New()'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [4] C:\B\VBWinApp\VBWinApp\My Project\Application.Designer.vb(27) :
> error BC30456: 'IsSingleInstance' is not a member of
> 'MyApplication'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [5] C:\B\VBWinApp\VBWinApp\My Project\Application.Designer.vb(28) :
> error BC30456: 'EnableVisualStyles' is not a member of
> 'MyApplication'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [6] C:\B\VBWinApp\VBWinApp\My Project\Application.Designer.vb(29) :
> error BC30456: 'SaveMySettingsOnExit' is not a member of
> 'MyApplication'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [7] C:\B\VBWinApp\VBWinApp\My Project\Application.Designer.vb(30) :
> error BC30456: 'ShutDownStyle' is not a member of
> 'MyApplication'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [8] C:\B\VBWinApp\VBWinApp\My Project\Application.Designer.vb(35) :
> error BC30456: 'MainForm' is not a member of
> 'MyApplication'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [9] C:\B\VBWinApp\VBWinApp\My Project\Application.Designer.vb(35) :
> error BC30456: 'VBWinApp' is not a member of
> 'Global'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [10]C:\B\VBWinApp\VBWinApp\My Project\Settings.Designer.vb(33) : error
> BC30456: 'Application' is not a member of
> 'My'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [11]C:\B\VBWinApp\VBWinApp\My Project\Settings.Designer.vb(47) : error
> BC30456: 'Application' is not a member of
> 'My'. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [12]C:\B\VBWinApp\VBWinApp\My Project\AssemblyInfo.vb(1) : hidden
> BC50001: Unused import statement. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [13]C:\Users\xxxxx\AppData\Local\Temp\.NETFramework,Version=v4.5.2.AssemblyAttributes.vb(4)
> : hidden BC50001: Unused import
> statement. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic}
> [14]C:\Users\xxxxx\AppData\Local\Temp\.NETFramework,Version=v4.5.2.AssemblyAttributes.vb(5)
> : hidden BC50001: Unused import
> statement. Microsoft.CodeAnalysis.Diagnostic
> {Microsoft.CodeAnalysis.VisualBasic.VBDiagnostic} | 0debug |
Get intersection between grouped data in SQL in a performant way : In my case there is a table called ServiceCarrier, as shown in the link below.
<https://i.stack.imgur.com/GIMpi.png>
I want to group it by carrier Id and get the intersection between the grouped sets.
Appreciate your help in advance | 0debug |
change password user laravel 5.3 : <p>I want to create a form with 3 fields (old_password, new_password, confirm_password) with Laravel 5.</p>
<p><strong>View</strong></p>
<p>old password :
<code>{!! Form::password('old_password',['class' => 'form-control']) !!}</code></p>
<p>New Password : <code>{!! Form::password('password',['class' => 'form-control']) !!}</code></p>
<p>Confirm New Password : <code>{!! Form::password('verify_password',['class' => 'form-control']) !!}</code></p>
<p><strong>Controller when user register</strong></p>
<pre><code>public function postRegister(Request $request)
{
$rules = [
'email' => 'required|email|unique:users',
'confirm_email' => 'required|same:email',
'password' => 'required|min:8|regex:/^(?=\S*[a-z])(?=\S*[!@#$&*])(?=\S*[A-Z])(?=\S*[\d])\S*$/',
'verify_password' => 'required|same:password',
];
$messages = [
'email.required' => 'email tidak boleh kosong',
'password.required' => 'password tidak boleh kosong',
'password.min' => 'Password harus minimal 8 karakter',
'password.regex' => 'Format password harus terdiri dari kombinasi huruf besar, angka dan karakter spesial (contoh:!@#$%^&*?><).',
'verify_password.required' => 'Verify Password tidak boleh kosong',
'email.email' => 'Format Email tidak valid',
'email.unique' => 'Email yang anda masukkan telah digunakan',
'verify_password.same' => 'Password tidak sama!',
];
$this->validate($request,$rules,$messages);
$newUser = $this->user->create([
'email' => $request->email,
'password' => \Hash::make($request->password),
]);
$this->activationService->sendActivationMail($newUser);
return redirect('/account/login')->with('success', 'Check your email');
}
</code></pre>
<p>I'm new to Laravel. I've read some similar change-password questions on Stack Overflow, but they didn't help me.</p>
<p>How should I write the code in my controller to change a user's password?
Thanks in advance.</p>
| 0debug |
Spigot/Bukkit API | Getting the weirdest NPE I've ever seen : <p>I'm making a plugin and I started getting a weird NullPointerException error.</p>
<p><code>Error: <a href="https://hastebin.com/umavubitem.swift" rel="nofollow noreferrer">https://hastebin.com/umavubitem.swift</a>
Full class: <a href="https://hastebin.com/maledujuju.swift" rel="nofollow noreferrer">https://hastebin.com/maledujuju.swift</a></code></p>
<p>Thank you.</p>
| 0debug |
static void ahci_start_transfer(IDEDMA *dma)
{
AHCIDevice *ad = DO_UPCAST(AHCIDevice, dma, dma);
IDEState *s = &ad->port.ifs[0];
uint32_t size = (uint32_t)(s->data_end - s->data_ptr);
uint32_t opts = le32_to_cpu(ad->cur_cmd->opts);
int is_write = opts & AHCI_CMD_WRITE;
int is_atapi = opts & AHCI_CMD_ATAPI;
int has_sglist = 0;
if (is_atapi && !ad->done_atapi_packet) {
ad->done_atapi_packet = true;
size = 0;
goto out;
}
if (!ahci_populate_sglist(ad, &s->sg, s->io_buffer_offset)) {
has_sglist = 1;
}
DPRINTF(ad->port_no, "%sing %d bytes on %s w/%s sglist\n",
is_write ? "writ" : "read", size, is_atapi ? "atapi" : "ata",
has_sglist ? "" : "o");
if (has_sglist && size) {
if (is_write) {
dma_buf_write(s->data_ptr, size, &s->sg);
} else {
dma_buf_read(s->data_ptr, size, &s->sg);
}
}
out:
s->data_ptr = s->data_end;
ahci_commit_buf(dma, size);
s->end_transfer_func(s);
if (!(s->status & DRQ_STAT)) {
ahci_write_fis_pio(ad, le32_to_cpu(ad->cur_cmd->status));
}
}
| 1threat |
Callback function not working in Axios javascript : <p>I'm trying to make a registration form with VueJS, Axios and Laravel.
I've written the following shortcut function to make a POST request and handle Laravel form validation errors:</p>
<pre><code>window.post = function (vueResource, url, data, responseCallback){
let _rsrc = vueResource;
let _data = vueResource.$data;
if(data !== ''){
_data = data;
}
axios.post(url, _data).then(responseCallback).catch(function(error){
if(error.response.status == 422){
_rsrc.errors = error.response.data.errors;
return false;
}
throw error;
});
}
</code></pre>
<p>Now when i make a post request, using the following code:</p>
<pre><code> post(this,'/api/register','',function(response){
alert(1);
});
</code></pre>
<p>The request is done, a user is created, but the callback function is not executed. There's no alert. </p>
<p>Why is this not working?</p>
| 0debug |
Android SQLiteException: near "group": syntax error (code 1), while compiling : public void onCreate(SQLiteDatabase db){
String CREATE_BARS_TABLE="CREATE TABLE "+ TABLE_INVENTORY+" ("+KEY_ID+" INTEGER PRIMARY KEY, "
+KEY_CATEGORY+" TEXT, "
+KEY_GROUP+" TEXT, "
+KEY_SERIAL+" TEXT, "
+KEY_BUYING_PRICE+" REAL, "
+KEY_UNIT_PRICE+" REAL, "
+KEY_DATE_ADDED+" TEXT "+")";
db.execSQL(CREATE_BARS_TABLE);
}
android.database.sqlite.SQLiteException: near "group": syntax error (code 1): , while compiling: CREATE TABLE inventory (id INTEGER PRIMARY KEY, category TEXT, group TEXT, serial TEXT, buyingprice REAL, unitprice REAL, dateAdded TEXT )
at android.database.sqlite.SQLiteConnection.nativePrepareStatement(Native Method) | 0debug |
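The root cause is that `GROUP` is an SQL reserved keyword, so it cannot be used as a bare column name. A hedged sketch, illustrated with Python's built-in `sqlite3` module rather than the Android API (the table and values below are hypothetical): quoting the identifier, or renaming the column to something like `item_group`, fixes the syntax error.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unquoted, GROUP is an SQL keyword, reproducing the reported error.
try:
    conn.execute("CREATE TABLE inventory (id INTEGER PRIMARY KEY, group TEXT)")
except sqlite3.OperationalError as exc:
    print(exc)  # near "group": syntax error

# Double quotes make it a plain identifier (renaming, e.g. item_group, also works).
conn.execute('CREATE TABLE inventory (id INTEGER PRIMARY KEY, "group" TEXT)')
conn.execute('INSERT INTO inventory ("group") VALUES (?)', ("bars",))
print(conn.execute('SELECT "group" FROM inventory').fetchone()[0])  # bars
```

In the Android code, that means building the statement as `+ "\"" + KEY_GROUP + "\"" + " TEXT, "` or simply choosing a non-reserved column name.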
void restore_boot_order(void *opaque)
{
char *normal_boot_order = opaque;
static int first = 1;
if (first) {
first = 0;
return;
}
qemu_boot_set(normal_boot_order, NULL);
qemu_unregister_reset(restore_boot_order, normal_boot_order);
g_free(normal_boot_order);
}
| 1threat |
static int vmdk_open_sparse(BlockDriverState *bs,
BlockDriverState *file, int flags,
char *buf, Error **errp)
{
uint32_t magic;
magic = ldl_be_p(buf);
switch (magic) {
case VMDK3_MAGIC:
return vmdk_open_vmfs_sparse(bs, file, flags, errp);
break;
case VMDK4_MAGIC:
return vmdk_open_vmdk4(bs, file, flags, errp);
break;
default:
return -EMEDIUMTYPE;
break;
}
}
| 1threat |