Python 3 Support and Docker Update #136

Merged Jul 23, 2020 (57 commits)
Commits (57)
a2bca23
Update the .travis.yml page to include a call to 'make docker' just t…
jaycedowell Aug 9, 2019
3b2b365
Updated Dockerfile.gpu to use a safe base image. Maybe we want to us…
jaycedowell Aug 9, 2019
e00c474
Moved up to Ubuntu 16.04 for the GPU Dockerfile.
jaycedowell Aug 9, 2019
c7e6e94
Trying out a new version of ctypesgen and Python3 support.
jaycedowell Aug 16, 2019
cb759db
Looks like things were moved around quite a bit in this new ctypesgen…
jaycedowell Aug 16, 2019
a57d88b
Updated the version of ctypesgen used in the Dockerfile.cpu.
jaycedowell Aug 16, 2019
fe14238
Attempting to test on an additional version of PyPy (3.5-7.0).
jaycedowell Aug 16, 2019
1e7d370
Trying a different PyPy3.5.
jaycedowell Aug 16, 2019
2593cb7
Trying yet another PyPy version for Python3 support.
jaycedowell Aug 16, 2019
08d7c53
Moved more Bifrost modules over to print_function.
jaycedowell Oct 3, 2019
d25155d
Updated the version of pypy used for the Python2.7 testing.
jaycedowell Oct 15, 2019
c3fdbc9
Trying Ubuntu 18.04 just to see what happens.
jaycedowell Oct 18, 2019
68d5de7
Updated the Python3 version to 3.6.
jaycedowell Nov 6, 2019
de20104
Cleaned up range/xrange in a Python2 and Python3 compatible way.
jaycedowell Nov 6, 2019
fd754e6
And missed bifrost.blocks.sigproc.
jaycedowell Nov 6, 2019
41620f5
Merge branch 'master' into docker-gpu-update
jaycedowell Feb 4, 2020
e8a9f4a
Moved the Docker GPU images over to CUDA 10.2.
jaycedowell Feb 4, 2020
8125bec
Merge branch 'docker-gpu-update' of https://github.com/ledatelescope/…
jaycedowell Feb 4, 2020
5af2b74
Moved the contents of python/wrap.py into python/Makefile since it re…
jaycedowell Feb 4, 2020
64d50e4
Did that do anything?
jaycedowell Feb 4, 2020
a447d8b
Updated the setup.py script to include the tools in tools as part of …
jaycedowell Feb 4, 2020
06f75ab
Cleaned up a few compiler warnings in memory.cpp under CUDA 10+.
jaycedowell Feb 6, 2020
602908f
Actually fixed the CUDA 10.2 warnings in memory.cpp.
jaycedowell Feb 20, 2020
c0d9601
Cleaned up some compiler warnings in fir.cu.
jaycedowell Mar 18, 2020
c7a9782
Changed the Python3 logic to be Python2 logic.
jaycedowell Apr 20, 2020
f9a77e7
Started working on a Jenkins test script to see if we can get GPU tes…
jaycedowell May 16, 2020
411280e
What about this?
jaycedowell May 16, 2020
c05d033
More tweaking.
jaycedowell May 16, 2020
328d7f2
Added back in the FIR tests.
jaycedowell May 16, 2020
c95f374
@realtimeradio was kind enough to test this branch and found a few pr…
jaycedowell May 21, 2020
095738c
Merge branch 'jenkins-gpu-testing' into docker-gpu-update
jaycedowell May 21, 2020
4a5a2d3
Updated test_serialize.py.
jaycedowell May 21, 2020
2eaa2b7
Added in test_fdmt to the Jenkins tests.
jaycedowell May 21, 2020
2c2dbab
id -> id_ typos
jack-h Jun 2, 2020
1408f54
Merge pull request #141 from realtimeradio/id_fix
jaycedowell Jun 4, 2020
0b2603f
More prints that needed to be converted.
jaycedowell Jun 4, 2020
a949c43
Merge branch 'docker-gpu-update' of https://github.com/ledatelescope/…
jaycedowell Jun 4, 2020
57a2d9f
Added a -y option to download_breakthrough_listen_data.py so that it …
jaycedowell Jun 4, 2020
caa2359
More Python3 imports
jack-h Jun 4, 2020
f324f71
Py2 -> 3 changes
jack-h Jun 4, 2020
73ffc64
I have no idea why test_fft_detect.py always fails.
jaycedowell Jun 4, 2020
53fa44d
Pulled in the recent changes to master.
jaycedowell Jun 9, 2020
39d3acc
Merge branch 'docker-gpu-update' into docker-gpu-update
jaycedowell Jun 9, 2020
c94947b
Python2/3 compatibility catch
jaycedowell Jun 9, 2020
79db26c
Fixed a few path problems in test_guppi*.py and download_breakthrough…
jaycedowell Jun 9, 2020
03c3a68
I still don't understand test_fft_detect.py.
jaycedowell Jun 9, 2020
f0f3a3c
Cleaned up after test_fft*.py and added in the jenkins.sh test launcher.
jaycedowell Jun 9, 2020
4ad3286
Typo in jenkins.sh. Plus, a test to see if I can track down the rece…
jaycedowell Jun 10, 2020
9d5e40f
Merge branch 'docker-gpu-update' of https://github.com/ledatelescope/…
jack-h Jun 11, 2020
6eeb92f
Move sys import to before its first use
jack-h Jun 11, 2020
3ed8c02
Worked on fixing problems uncovered in the testbench tests.
jaycedowell Jun 12, 2020
c2b96df
A more robust fix for DataType.py.
jaycedowell Jun 12, 2020
746ffd1
Ugh.
jaycedowell Jun 12, 2020
e0fd0f8
Let's see if we can get the coverage in the testbench test counted.
jaycedowell Jun 12, 2020
fe96233
Merge pull request #142 from realtimeradio/docker-gpu-update
jaycedowell Jul 7, 2020
9de5403
Version bump.
jaycedowell Jul 21, 2020
36b42a4
Follow redirects to get to bf_test_files.tar.gz.
jaycedowell Jul 21, 2020
5 changes: 4 additions & 1 deletion .travis.yml
@@ -4,8 +4,10 @@ language: python

python:
- 2.7
- 3.6
# PyPy versions
- pypy2.7-6.0
- pypy3

services:
- docker
@@ -27,6 +29,7 @@ jobs:
python: 2.7
script:
- make docker-cpu
- make docker
- bash ./.travis_deploy_docs.sh

script:
@@ -38,7 +41,7 @@ script:
simplejson \
pint \
graphviz \
git+https://github.com/davidjamesca/ctypesgen.git@3d2d9803339503d2988382aa861b47a6a4872c32 \
git+https://github.com/olsonse/ctypesgen.git@9bd2d249aa4011c6383a10890ec6f203d7b7990f \
coveralls \
codecov
- sudo make -j NOCUDA=1
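The dependency change in this hunk swaps the ctypesgen fork and pins it to an exact commit, which is pip's `git+<url>@<ref>` VCS requirement syntax. A minimal sketch of assembling such a pinned requirement string (the repo URL and commit hash are the ones from this diff):

```python
# Build a commit-pinned pip VCS requirement, matching the form used in
# .travis.yml and the Dockerfiles in this PR.
repo = "https://github.com/olsonse/ctypesgen.git"
commit = "9bd2d249aa4011c6383a10890ec6f203d7b7990f"
requirement = "git+{}@{}".format(repo, commit)
print(requirement)
```

Pinning to a hash rather than a branch keeps CI and Docker builds reproducible even if the upstream fork moves.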
7 changes: 7 additions & 0 deletions Dockerfile.cpu
@@ -6,6 +6,13 @@ ARG DEBIAN_FRONTEND=noninteractive

ENV TERM xterm

# Update ctypesgen
RUN curl -fSsL -O https://bootstrap.pypa.io/get-pip.py && \
python get-pip.py && \
rm get-pip.py
RUN pip --no-cache-dir install \
git+https://github.com/olsonse/ctypesgen.git@9bd2d249aa4011c6383a10890ec6f203d7b7990f

# Build the library
WORKDIR /bifrost
COPY . .
4 changes: 2 additions & 2 deletions Dockerfile.gpu
@@ -1,4 +1,4 @@
FROM nvidia/cuda:8.0
FROM nvidia/cuda:10.2-devel-ubuntu18.04

MAINTAINER Ben Barsdell <[email protected]>

@@ -31,7 +31,7 @@ RUN pip --no-cache-dir install \
contextlib2 \
simplejson \
pint \
git+https://github.com/davidjamesca/ctypesgen.git@3d2d9803339503d2988382aa861b47a6a4872c32 \
git+https://github.com/olsonse/ctypesgen.git@9bd2d249aa4011c6383a10890ec6f203d7b7990f \
graphviz

ENV TERM xterm
4 changes: 2 additions & 2 deletions Dockerfile_prereq.gpu
@@ -1,4 +1,4 @@
FROM nvidia/cuda:8.0
FROM nvidia/cuda:10.2-devel-ubuntu18.04

MAINTAINER Ben Barsdell <[email protected]>

@@ -31,7 +31,7 @@ RUN pip --no-cache-dir install \
contextlib2 \
simplejson \
pint \
git+https://github.com/davidjamesca/ctypesgen.git@3d2d9803339503d2988382aa861b47a6a4872c32 \
git+https://github.com/olsonse/ctypesgen.git@9bd2d249aa4011c6383a10890ec6f203d7b7990f \
graphviz

ENV TERM xterm
2 changes: 1 addition & 1 deletion README.md
@@ -99,7 +99,7 @@ print "All done"
* ctypesgen

```
$ sudo pip install numpy contextlib2 pint git+https://github.com/davidjamesca/ctypesgen.git@3d2d9803339503d2988382aa861b47a6a4872c32
$ sudo pip install numpy contextlib2 pint git+https://github.com/olsonse/ctypesgen.git@9bd2d249aa4011c6383a10890ec6f203d7b7990f
```

### Bifrost installation
2 changes: 1 addition & 1 deletion docs/source/Getting-started-guide.rst
@@ -44,7 +44,7 @@ numpy, matplotlib, contextlib2, simplejson, pint, graphviz, ctypesgen
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have already installed pip, this step should be as simple as
``pip install --user numpy matplotlib contextlib2 simplejson pint graphviz git+https://github.com/davidjamesca/ctypesgen.git@3d2d9803339503d2988382aa861b47a6a4872c32``.
``pip install --user numpy matplotlib contextlib2 simplejson pint graphviz git+https://github.com/olsonse/ctypesgen.git@9bd2d249aa4011c6383a10890ec6f203d7b7990f``.

C++ dependencies
~~~~~~~~~~~~~~~~
2 changes: 1 addition & 1 deletion python/Makefile
@@ -21,7 +21,7 @@ $(BIFROST_PYTHON_VERSION_FILE): ../config.mk
@echo "__version__ = \"$(LIBBIFROST_MAJOR).$(LIBBIFROST_MINOR).$(LIBBIFROST_PATCH)\"" > $@

define run_ctypesgen
ctypesgen.py -l$1 -I$2 $^ -o $@
python -c 'from ctypesgen import main as ctypeswrap; ctypeswrap.main()' -l$1 -I$2 $^ -o $@
# WAR for 'const char**' being generated as POINTER(POINTER(c_char)) instead of POINTER(c_char_p)
sed -i 's/POINTER(c_char)/c_char_p/g' $@
# WAR for a buggy WAR in ctypesgen that breaks type checking and auto-byref functionality
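The Makefile change above reflects that the newer ctypesgen is an importable package rather than a standalone `ctypesgen.py` script, so its `main()` is invoked through `python -c`. The same trick works for any module that exposes a `main()`. The sketch below uses the stdlib's `json.tool` as a stand-in, since running ctypesgen itself would require it to be installed:

```python
import subprocess
import sys

# Invoke a package's entry point via `python -c`, the same pattern the
# Makefile now uses for ctypesgen; json.tool stands in for ctypesgen here.
result = subprocess.run(
    [sys.executable, "-c", "from json import tool; tool.main()"],
    input=b'{"a": 1}',
    capture_output=True,
)
print(result.stdout.decode())
```

This avoids depending on where (or whether) pip installed a console script for the package.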
4 changes: 4 additions & 0 deletions python/bifrost/DataType.py
@@ -38,6 +38,10 @@
cf32: 32+32-bit complex floating point
"""

import sys
if sys.version_info > (3,):
xrange = range

from libbifrost import _bf
import numpy as np

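The shim added to DataType.py aliases Python 2's `xrange` to `range` when running under Python 3, so one lazy-iteration spelling works on both interpreters. A minimal self-contained illustration of the same pattern:

```python
import sys

# On Python 3 the xrange builtin is gone, but range is already lazy,
# so aliasing it restores a single spelling for both interpreters.
if sys.version_info > (3,):
    xrange = range

squares = [i * i for i in xrange(5)]
print(squares)
```

The alternative (replacing every `xrange` with `range`) would allocate full lists on Python 2; the alias keeps Python 2 behavior unchanged.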
9 changes: 6 additions & 3 deletions python/bifrost/__init__.py
@@ -31,6 +31,9 @@
"""

# TODO: Decide how to organise the namespace

from __future__ import print_function

import core, memory, affinity, ring, block, address, udp_socket
import pipeline
import device
@@ -49,9 +52,9 @@
try:
from .version import __version__
except ImportError:
print "*************************************************************************"
print "Please run `make` from the root of the source tree to generate version.py"
print "*************************************************************************"
print("*************************************************************************")
print("Please run `make` from the root of the source tree to generate version.py")
print("*************************************************************************")
raise
__author__ = "The Bifrost Authors"
__copyright__ = "Copyright (c) 2016, The Bifrost Authors. All rights reserved.\nCopyright (c) 2016, NVIDIA CORPORATION. All rights reserved."
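The hunk above converts Python 2 print statements into function calls and adds the `__future__` import so the file parses under both interpreters. A small sketch of why the import matters (the message text echoes the one in the diff):

```python
# On Python 2 this import replaces the print *statement* with the
# built-in print() *function*; on Python 3 it is a harmless no-op.
from __future__ import print_function

# Function-call syntax, including keyword arguments like sep/end,
# now works identically on both interpreters.
message = "Please run `make` from the root of the source tree"
print(message, "to generate version.py")
```

Without the import, `print("a", "b")` on Python 2 silently prints the tuple `('a', 'b')` rather than two space-separated strings.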
12 changes: 7 additions & 5 deletions python/bifrost/addon/leda/bandfiles.py
@@ -24,6 +24,8 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

from __future__ import print_function

import os, sys

sys.path.append('..')
@@ -83,12 +85,12 @@ def __init__(self, fname):
obs1 = extract_obs_offset_from_name(fname)
obs2 = extract_obs_offset_in_file(fname)
if obs1 != obs2 and obs1 != "UNKNOWN" and obs2 !="UNKNOWN":
print "Consistency Error", fname, ": OBS_OFFSET in file doesn't match the offset in the name"
print("Consistency Error", fname, ": OBS_OFFSET in file doesn't match the offset in the name")
"""

if hedr["SOURCE"] == "LEDA_TEST":
if not is_integer(n_scans): print "CONSISTENCY ERROR, ", fname, "scan:",n_scans, "is not integer"
if not is_integer((self.end_time-self.start_time)/9.0): print "CONSISTENCY ERROR", fname, ": not 9 sec dump in file"
if not is_integer(n_scans): print("CONSISTENCY ERROR, ", fname, "scan:",n_scans, "is not integer")
if not is_integer((self.end_time-self.start_time)/9.0): print("CONSISTENCY ERROR", fname, ": not 9 sec dump in file")

# Gather the file info for all files that have different frequency but the same observation time.
class BandFiles(object):
@@ -106,7 +108,7 @@ def __init__(self, basename):
if basename[-5:] == ".dada": # Just a single file
if os.access(basename,os.R_OK): self.files.append(FileInfo(basename))
else: "Error:", basename, "does not exist or is not readable"
print basename, FileInfo(basename), self.files
print(basename, FileInfo(basename), self.files)
else:

# Look for files in all the standard locations
@@ -138,7 +140,7 @@ def __init__(self, basename):
if f.start_time not in self.start_time_present: self.start_time_present.append(f.start_time)

if len(self.start_time_present) > 1:
print "Error: Files with same timestamp in their name have different internal time. Basename:",basename
print("Error: Files with same timestamp in their name have different internal time. Basename:",basename)
#sys.exit(1)

self.start_time = self.start_time_present[0]
21 changes: 13 additions & 8 deletions python/bifrost/addon/leda/blocks.py
@@ -28,6 +28,11 @@
This file contains blocks specific to LEDA-OVRO.
"""

from __future__ import print_function
import sys
if sys.version_info > (3,):
xrange = range

import os
import bandfiles
import bifrost
@@ -75,10 +80,10 @@ def __init__(self, time_stamp, core=-1, gulp_nframe=4096):
i += 1

# Report what we've got
print "Num files in time:", len(beamformer_scans)
print "File and number:"
print("Num files in time:", len(beamformer_scans))
print("File and number:")
for scan in beamformer_scans:
print os.path.basename(scan.files[0].name)+":", len(scan.files)
print(os.path.basename(scan.files[0].name)+":", len(scan.files))

self.beamformer_scans = beamformer_scans # List of full-band time steps

@@ -100,7 +105,7 @@ def main(self, input_rings, output_rings):
ohdr["tsamp"] = self.SAMPLING_RATE
ohdr['foff'] = self.CHANNEL_WIDTH

#print length_one_second, ring_span_size, file_chunk_size, number_of_chunks
#print(length_one_second, ring_span_size, file_chunk_size, number_of_chunks)

with self.oring.begin_writing() as oring:

@@ -109,7 +114,7 @@
# Go through the frequencies
for f in scan.files:

print "Opening", f.name
print("Opening", f.name)

with open(f.name,'rb') as ifile:
ifile.read(self.HEADER_SIZE)
@@ -120,16 +125,16 @@
self.oring.resize(ring_span_size)
with oring.begin_sequence(f.name, header=json.dumps(ohdr)) as osequence:

for i in range(number_of_seconds):
for i in xrange(number_of_seconds):
# Get a chunk of data from the file. The whole band is used, but only a chunk of time (1 second).
# Massage the data so it can go through the ring. That means changng the data type and flattening.
try:
data = np.fromfile(ifile, count=file_chunk_size, dtype=np.int8).astype(np.float32)
except:
print "Bad read. Stopping read."
print("Bad read. Stopping read.")
return
if data.size != length_one_second*self.N_BEAM*self.N_CHAN*2:
print "Bad data shape. Stopping read."
print("Bad data shape. Stopping read.")
return
data = data.reshape(length_one_second, self.N_BEAM, self.N_CHAN, 2)
power = (data[...,0]**2 + data[...,1]**2).mean(axis=1) # Now have time by frequency.
22 changes: 12 additions & 10 deletions python/bifrost/addon/leda/make_header.py
@@ -36,6 +36,8 @@
Makes header.txt files that is used by corr2uvfit and DuCT.
"""

from __future__ import print_function

import numpy as np
import os, sys, ephem, datetime
from dateutil import tz
@@ -63,7 +65,7 @@ def __init__(self, filename, warnings, file_size):
self.filename = filename
self.warnings = warnings
self.file_size = file_size # Externally supplied
#print filename, warnings, file_size
#print(filename, warnings, file_size)
self.generate_info()

def generate_info(self):
@@ -113,7 +115,7 @@ def generate_info(self):
data_size_dsk = int(header["FILE_SIZE"]) # these data sizes don't include header
data_size_hdr = data_size_dsk
else: # Failure
if self.warnings: print "WARNING: File is zipped and FILE_SIZE is not in header and file_size not supplied. "
if self.warnings: print("WARNING: File is zipped and FILE_SIZE is not in header and file_size not supplied. ")
have_size = False
data_size_hdr = data_size_dsk = 0
else: # File not zipped. Can get true complete file size
@@ -122,7 +124,7 @@
else: data_size_hdr = data_size_dsk

if data_size_hdr != data_size_dsk:
if self.warnings: print "WARNING: Data size in file doesn't match actual size. Using actual size."
if self.warnings: print("WARNING: Data size in file doesn't match actual size. Using actual size.")

data_size = data_size_dsk # Settle on this as the size of the data

@@ -135,13 +137,13 @@

if "BYTES_PER_AVG" in header and have_size:
if data_size % bpa != 0:
if self.warnings: print "WARNING: BYTES_PER_AVG does not result in an integral number of scans"
if self.warnings: print("WARNING: BYTES_PER_AVG does not result in an integral number of scans")
if "DATA_ORDER" in header and self.data_order == 'TIME_SUBSET_CHAN_TRIANGULAR_POL_POL_COMPLEX':
if self.warnings:
print 'DATA_ORDER is TIME_SUBSET_CHAN_TRIANGULAR_POL_POL_COMPLEX, resetting BYTES_PER_AVG to',(109*32896*2*2+9*109*1270*2*2)*8,"(fixed)"
print('DATA_ORDER is TIME_SUBSET_CHAN_TRIANGULAR_POL_POL_COMPLEX, resetting BYTES_PER_AVG to',(109*32896*2*2+9*109*1270*2*2)*8,"(fixed)")
bpa = (109*32896*2*2+9*109*1270*2*2)*8
if data_size % bpa != 0 and self.warnings:
print "WARNING: BYTES_PER_AVG still doesn't give integral number of scans"
print("WARNING: BYTES_PER_AVG still doesn't give integral number of scans")

self.n_int = float(data_size) / bpa

@@ -213,9 +215,9 @@ def __init__(self, header):
ra, dec = ovro.radec_of(0, np.pi/2)
self.lst_str = str(float(ra) / 2 / np.pi * 24)
self.dec_str = str(float(repr(dec))*180/np.pi)
#print ("UTC START: %s"%dada_file.datestamp)
#print ("TIME OFFSET: %s"%datetime.timedelta(seconds=dada_file.t_offset))
#print ("NEW START: (%s, %s)"%(date_str, time_str))
#print("UTC START: %s"%dada_file.datestamp)
#print("TIME OFFSET: %s"%datetime.timedelta(seconds=dada_file.t_offset))
#print("NEW START: (%s, %s)"%(date_str, time_str))


def make_header(filename, write=True, warn=True, size=None):
@@ -286,5 +288,5 @@ def make_header(filename, write=True, warn=True, size=None):
if len(sys.argv) == 2: make_header(sys.argv[1])
elif len(sys.argv) == 3: make_header(sys.argv[1],size=sys.argv[2])
else:
print "Expecting file name and optionally file size"
print("Expecting file name and optionally file size")
